00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 608 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3270 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.072 Fetching changes from the remote Git repository 00:00:00.074 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.106 Using shallow fetch with depth 1 00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.106 > git --version # timeout=10 00:00:00.142 > git --version # 'git version 2.39.2' 00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.181 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.182 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:23.896 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:23.908 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:23.923 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:23.924 > git config core.sparsecheckout # timeout=10 00:00:23.938 > git read-tree -mu HEAD # timeout=10 00:00:23.958 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:23.987 Commit message: "inventory: add WCP3 to free inventory" 00:00:23.987 > git rev-list --no-walk f574307dba849e7d22dd5631ce9e594362bd2ebc # timeout=10 00:00:24.093 [Pipeline] Start of Pipeline 00:00:24.106 [Pipeline] library 00:00:24.107 Loading library shm_lib@master 00:00:24.107 Library shm_lib@master is cached. Copying from home. 00:00:24.124 [Pipeline] node 00:00:24.135 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:24.137 [Pipeline] { 00:00:24.148 [Pipeline] catchError 00:00:24.149 [Pipeline] { 00:00:24.158 [Pipeline] wrap 00:00:24.165 [Pipeline] { 00:00:24.171 [Pipeline] stage 00:00:24.173 [Pipeline] { (Prologue) 00:00:24.362 [Pipeline] sh 00:00:24.645 + logger -p user.info -t JENKINS-CI 00:00:24.661 [Pipeline] echo 00:00:24.662 Node: WFP8 00:00:24.669 [Pipeline] sh 00:00:24.967 [Pipeline] setCustomBuildProperty 00:00:24.980 [Pipeline] echo 00:00:24.982 Cleanup processes 00:00:24.987 [Pipeline] sh 00:00:25.271 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.271 820490 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.284 [Pipeline] sh 00:00:25.566 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:25.566 ++ grep -v 'sudo pgrep' 00:00:25.566 ++ awk '{print $1}' 00:00:25.566 + sudo kill -9 00:00:25.566 + true 00:00:25.583 [Pipeline] cleanWs 00:00:25.594 [WS-CLEANUP] Deleting project workspace... 00:00:25.594 [WS-CLEANUP] Deferred wipeout is used... 
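The cleanup step above collects any stray autotest processes still holding the workspace and kills them before the run starts; the trailing "+ true" keeps the step from failing when nothing matches. A minimal sketch of that pattern, assuming the same workspace path as this run:

    # Kill leftover test processes from a previous run; tolerate an empty match.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids || true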
00:00:25.600 [WS-CLEANUP] done 00:00:25.607 [Pipeline] setCustomBuildProperty 00:00:25.621 [Pipeline] sh 00:00:25.902 + sudo git config --global --replace-all safe.directory '*' 00:00:25.981 [Pipeline] httpRequest 00:00:26.013 [Pipeline] echo 00:00:26.015 Sorcerer 10.211.164.101 is alive 00:00:26.023 [Pipeline] httpRequest 00:00:26.027 HttpMethod: GET 00:00:26.028 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:26.028 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:26.044 Response Code: HTTP/1.1 200 OK 00:00:26.045 Success: Status code 200 is in the accepted range: 200,404 00:00:26.045 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:30.463 [Pipeline] sh 00:00:30.746 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:30.763 [Pipeline] httpRequest 00:00:30.789 [Pipeline] echo 00:00:30.792 Sorcerer 10.211.164.101 is alive 00:00:30.802 [Pipeline] httpRequest 00:00:30.807 HttpMethod: GET 00:00:30.808 URL: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:30.809 Sending request to url: http://10.211.164.101/packages/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:00:30.817 Response Code: HTTP/1.1 200 OK 00:00:30.818 Success: Status code 200 is in the accepted range: 200,404 00:00:30.819 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:26.620 [Pipeline] sh 00:01:26.906 + tar --no-same-owner -xf spdk_2728651eeb6994be786e188da61cae84c5bb49ac.tar.gz 00:01:29.489 [Pipeline] sh 00:01:29.772 + git -C spdk log --oneline -n5 00:01:29.772 2728651ee accel: adjust task per ch define name 00:01:29.772 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:29.772 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:29.772 32a79de81 lib/event: add disable_cpumask_locks to spdk_app_opts 00:01:29.772 719d03c6a sock/uring: only register net impl if supported 00:01:29.795 [Pipeline] withCredentials 00:01:29.806 > git --version # timeout=10 00:01:29.818 > git --version # 'git version 2.39.2' 00:01:29.835 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.836 [Pipeline] { 00:01:29.846 [Pipeline] retry 00:01:29.848 [Pipeline] { 00:01:29.867 [Pipeline] sh 00:01:30.211 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:30.790 [Pipeline] } 00:01:30.813 [Pipeline] // retry 00:01:30.819 [Pipeline] } 00:01:30.839 [Pipeline] // withCredentials 00:01:30.847 [Pipeline] httpRequest 00:01:30.886 [Pipeline] echo 00:01:30.887 Sorcerer 10.211.164.101 is alive 00:01:30.896 [Pipeline] httpRequest 00:01:30.900 HttpMethod: GET 00:01:30.900 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.901 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.904 Response Code: HTTP/1.1 200 OK 00:01:30.904 Success: Status code 200 is in the accepted range: 200,404 00:01:30.904 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:36.207 [Pipeline] sh 00:01:36.491 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:37.879 [Pipeline] sh 00:01:38.159 + git -C dpdk log --oneline -n5 00:01:38.159 eeb0605f11 version: 23.11.0 00:01:38.159 
238778122a doc: update release notes for 23.11 00:01:38.159 46aa6b3cfc doc: fix description of RSS features 00:01:38.159 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:38.159 7e421ae345 devtools: support skipping forbid rule check 00:01:38.170 [Pipeline] } 00:01:38.185 [Pipeline] // stage 00:01:38.193 [Pipeline] stage 00:01:38.195 [Pipeline] { (Prepare) 00:01:38.214 [Pipeline] writeFile 00:01:38.228 [Pipeline] sh 00:01:38.510 + logger -p user.info -t JENKINS-CI 00:01:38.522 [Pipeline] sh 00:01:38.804 + logger -p user.info -t JENKINS-CI 00:01:38.816 [Pipeline] sh 00:01:39.099 + cat autorun-spdk.conf 00:01:39.099 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.099 SPDK_TEST_NVMF=1 00:01:39.099 SPDK_TEST_NVME_CLI=1 00:01:39.099 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.099 SPDK_TEST_NVMF_NICS=e810 00:01:39.099 SPDK_TEST_VFIOUSER=1 00:01:39.099 SPDK_RUN_UBSAN=1 00:01:39.099 NET_TYPE=phy 00:01:39.099 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.099 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.105 RUN_NIGHTLY=1 00:01:39.113 [Pipeline] readFile 00:01:39.150 [Pipeline] withEnv 00:01:39.152 [Pipeline] { 00:01:39.171 [Pipeline] sh 00:01:39.458 + set -ex 00:01:39.458 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:39.458 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.458 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.458 ++ SPDK_TEST_NVMF=1 00:01:39.458 ++ SPDK_TEST_NVME_CLI=1 00:01:39.458 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.458 ++ SPDK_TEST_NVMF_NICS=e810 00:01:39.458 ++ SPDK_TEST_VFIOUSER=1 00:01:39.458 ++ SPDK_RUN_UBSAN=1 00:01:39.458 ++ NET_TYPE=phy 00:01:39.458 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:39.458 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:39.458 ++ RUN_NIGHTLY=1 00:01:39.458 + case $SPDK_TEST_NVMF_NICS in 00:01:39.458 + DRIVERS=ice 00:01:39.458 + [[ tcp == \r\d\m\a ]] 00:01:39.458 + [[ -n ice ]] 00:01:39.458 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:39.458 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:39.458 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:39.458 rmmod: ERROR: Module irdma is not currently loaded 00:01:39.458 rmmod: ERROR: Module i40iw is not currently loaded 00:01:39.458 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:39.458 + true 00:01:39.458 + for D in $DRIVERS 00:01:39.458 + sudo modprobe ice 00:01:39.458 + exit 0 00:01:39.468 [Pipeline] } 00:01:39.488 [Pipeline] // withEnv 00:01:39.494 [Pipeline] } 00:01:39.510 [Pipeline] // stage 00:01:39.518 [Pipeline] catchError 00:01:39.520 [Pipeline] { 00:01:39.534 [Pipeline] timeout 00:01:39.534 Timeout set to expire in 50 min 00:01:39.536 [Pipeline] { 00:01:39.552 [Pipeline] stage 00:01:39.553 [Pipeline] { (Tests) 00:01:39.568 [Pipeline] sh 00:01:39.852 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.852 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.852 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.852 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:39.852 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.852 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.852 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:39.852 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.852 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:39.852 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:39.852 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:39.852 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:39.852 + source /etc/os-release 00:01:39.852 ++ NAME='Fedora Linux' 00:01:39.852 ++ VERSION='38 (Cloud Edition)' 00:01:39.852 ++ ID=fedora 00:01:39.852 ++ VERSION_ID=38 00:01:39.852 ++ VERSION_CODENAME= 00:01:39.852 ++ PLATFORM_ID=platform:f38 00:01:39.852 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:39.852 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.852 ++ LOGO=fedora-logo-icon 00:01:39.852 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:39.852 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.852 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:39.852 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.852 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.852 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.852 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:39.852 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.852 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:39.852 ++ SUPPORT_END=2024-05-14 00:01:39.852 ++ VARIANT='Cloud Edition' 00:01:39.852 ++ VARIANT_ID=cloud 00:01:39.852 + uname -a 00:01:39.852 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:39.852 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:42.392 Hugepages 00:01:42.392 node hugesize free / total 00:01:42.392 node0 1048576kB 0 / 0 00:01:42.392 node0 2048kB 0 / 0 00:01:42.392 node1 1048576kB 0 / 0 00:01:42.392 node1 2048kB 0 / 0 00:01:42.392 00:01:42.392 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:42.392 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:42.392 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:42.392 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:42.392 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:42.392 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:42.392 + rm -f /tmp/spdk-ld-path 00:01:42.392 + source autorun-spdk.conf 00:01:42.392 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.392 ++ SPDK_TEST_NVMF=1 00:01:42.392 ++ SPDK_TEST_NVME_CLI=1 00:01:42.392 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.392 ++ SPDK_TEST_NVMF_NICS=e810 00:01:42.392 ++ SPDK_TEST_VFIOUSER=1 00:01:42.392 ++ SPDK_RUN_UBSAN=1 00:01:42.392 ++ NET_TYPE=phy 00:01:42.392 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:42.392 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.392 ++ RUN_NIGHTLY=1 00:01:42.392 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:42.392 + [[ -n '' ]] 00:01:42.392 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.392 + for M in /var/spdk/build-*-manifest.txt 00:01:42.392 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:42.392 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.392 + for M in /var/spdk/build-*-manifest.txt 00:01:42.392 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:42.392 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.392 ++ uname 00:01:42.392 + [[ Linux == \L\i\n\u\x ]] 00:01:42.392 + sudo dmesg -T 00:01:42.392 + sudo dmesg --clear 00:01:42.392 + dmesg_pid=821970 00:01:42.392 + [[ Fedora Linux == FreeBSD ]] 00:01:42.392 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.392 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:42.392 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:42.392 + [[ -x /usr/src/fio-static/fio ]] 00:01:42.392 + export FIO_BIN=/usr/src/fio-static/fio 00:01:42.392 + FIO_BIN=/usr/src/fio-static/fio 00:01:42.392 + sudo dmesg -Tw 00:01:42.392 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:42.392 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:42.392 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:42.392 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.392 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:42.392 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:42.392 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.392 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:42.392 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:42.652 Test configuration: 00:01:42.652 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.652 SPDK_TEST_NVMF=1 00:01:42.652 SPDK_TEST_NVME_CLI=1 00:01:42.652 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.652 SPDK_TEST_NVMF_NICS=e810 00:01:42.652 SPDK_TEST_VFIOUSER=1 00:01:42.652 SPDK_RUN_UBSAN=1 00:01:42.652 NET_TYPE=phy 00:01:42.652 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:42.652 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.652 RUN_NIGHTLY=1 11:51:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:42.652 11:51:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:42.652 11:51:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:42.652 11:51:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:42.652 11:51:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.652 11:51:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.652 11:51:32 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.652 11:51:32 -- paths/export.sh@5 -- $ export PATH 00:01:42.652 11:51:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:42.652 11:51:32 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:42.652 11:51:32 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:42.652 11:51:32 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721037092.XXXXXX 00:01:42.652 11:51:32 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721037092.9DeUwj 00:01:42.652 11:51:32 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:42.652 11:51:32 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:01:42.652 11:51:32 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.652 11:51:32 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:42.652 11:51:32 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:42.652 11:51:32 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:42.652 11:51:32 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:42.652 11:51:32 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:42.652 11:51:32 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.652 11:51:32 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:42.652 11:51:32 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:42.652 11:51:32 -- pm/common@17 -- $ local monitor 00:01:42.652 11:51:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.652 11:51:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.652 11:51:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.652 11:51:32 -- pm/common@21 -- $ date +%s 00:01:42.653 11:51:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:42.653 11:51:32 -- pm/common@21 -- $ date +%s 00:01:42.653 11:51:32 -- pm/common@25 -- $ sleep 1 00:01:42.653 11:51:32 -- pm/common@21 -- $ date +%s 00:01:42.653 11:51:32 -- pm/common@21 -- $ date +%s 00:01:42.653 11:51:32 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721037092 00:01:42.653 11:51:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721037092 00:01:42.653 11:51:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721037092 00:01:42.653 11:51:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721037092 00:01:42.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721037092_collect-vmstat.pm.log 00:01:42.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721037092_collect-cpu-load.pm.log 00:01:42.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721037092_collect-cpu-temp.pm.log 00:01:42.653 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721037092_collect-bmc-pm.bmc.pm.log 00:01:43.591 11:51:33 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:43.591 11:51:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:43.591 11:51:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:43.591 11:51:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.591 11:51:33 -- spdk/autobuild.sh@16 -- $ date -u 00:01:43.591 Mon Jul 15 09:51:33 AM UTC 2024 00:01:43.591 11:51:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:43.591 v24.09-pre-206-g2728651ee 00:01:43.591 11:51:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:43.591 11:51:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:43.591 11:51:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:43.591 11:51:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:43.591 11:51:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.591 11:51:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.591 ************************************ 00:01:43.591 START TEST ubsan 00:01:43.591 ************************************ 00:01:43.591 11:51:33 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:43.591 using ubsan 00:01:43.591 00:01:43.591 real 0m0.000s 00:01:43.591 user 0m0.000s 00:01:43.591 sys 0m0.000s 00:01:43.591 11:51:33 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:43.591 11:51:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:43.591 ************************************ 00:01:43.591 END TEST ubsan 00:01:43.591 ************************************ 00:01:43.851 11:51:33 -- common/autotest_common.sh@1142 -- $ return 0 00:01:43.851 11:51:33 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:43.851 11:51:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:43.851 11:51:33 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:43.851 11:51:33 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:43.851 11:51:33 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:43.851 11:51:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.851 ************************************ 00:01:43.851 START TEST build_native_dpdk 00:01:43.851 ************************************ 00:01:43.851 11:51:33 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:43.851 eeb0605f11 version: 23.11.0 00:01:43.851 238778122a doc: update release notes for 23.11 00:01:43.851 46aa6b3cfc doc: fix description of RSS features 00:01:43.851 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:43.851 7e421ae345 devtools: support skipping forbid rule check 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:43.851 11:51:33 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:43.851 11:51:33 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:43.851 patching file config/rte_config.h 00:01:43.851 Hunk #1 succeeded at 60 (offset 1 line). 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:43.851 11:51:33 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.047 The Meson build system 00:01:48.047 Version: 1.3.1 00:01:48.047 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:48.047 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:48.047 Build type: native build 00:01:48.047 Program cat found: YES (/usr/bin/cat) 00:01:48.047 Project name: DPDK 00:01:48.047 Project version: 23.11.0 00:01:48.047 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:48.047 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:48.047 Host machine cpu family: x86_64 00:01:48.047 Host machine cpu: x86_64 00:01:48.047 Message: ## Building in Developer Mode ## 00:01:48.047 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:48.047 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:48.047 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:48.047 Program python3 found: YES (/usr/bin/python3) 00:01:48.047 Program cat found: YES (/usr/bin/cat) 00:01:48.047 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:48.047 Compiler for C supports arguments -march=native: YES 00:01:48.047 Checking for size of "void *" : 8 00:01:48.047 Checking for size of "void *" : 8 (cached) 00:01:48.047 Library m found: YES 00:01:48.047 Library numa found: YES 00:01:48.047 Has header "numaif.h" : YES 00:01:48.047 Library fdt found: NO 00:01:48.047 Library execinfo found: NO 00:01:48.047 Has header "execinfo.h" : YES 00:01:48.047 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:48.047 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:48.047 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:48.047 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:48.047 Run-time dependency openssl found: YES 3.0.9 00:01:48.047 Run-time dependency libpcap found: YES 1.10.4 00:01:48.047 Has header "pcap.h" with dependency libpcap: YES 00:01:48.047 Compiler for C supports arguments -Wcast-qual: YES 00:01:48.047 Compiler for C supports arguments -Wdeprecated: YES 00:01:48.047 Compiler for C supports arguments -Wformat: YES 00:01:48.047 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:48.047 Compiler for C supports arguments -Wformat-security: NO 00:01:48.047 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:48.047 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:48.047 Compiler for C supports arguments -Wnested-externs: YES 00:01:48.047 Compiler for C supports arguments -Wold-style-definition: YES 00:01:48.047 Compiler for C supports arguments -Wpointer-arith: YES 00:01:48.047 Compiler for C supports arguments -Wsign-compare: YES 00:01:48.047 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:48.047 Compiler for C supports arguments -Wundef: YES 00:01:48.047 Compiler for C supports arguments -Wwrite-strings: YES 00:01:48.047 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:48.047 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:48.047 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:48.047 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:48.047 Program objdump found: YES (/usr/bin/objdump) 00:01:48.047 Compiler for C supports arguments -mavx512f: YES 00:01:48.047 Checking if "AVX512 checking" compiles: YES 00:01:48.047 Fetching value of define "__SSE4_2__" : 1 00:01:48.047 Fetching value of define "__AES__" : 1 00:01:48.047 Fetching value of define "__AVX__" : 1 00:01:48.047 Fetching value of define "__AVX2__" : 1 00:01:48.047 Fetching value of define "__AVX512BW__" : 1 00:01:48.047 Fetching value of define "__AVX512CD__" : 1 00:01:48.047 Fetching value of define "__AVX512DQ__" : 1 00:01:48.047 Fetching value of define "__AVX512F__" : 1 00:01:48.047 Fetching value of define "__AVX512VL__" : 1 00:01:48.047 Fetching value of define "__PCLMUL__" : 1 00:01:48.047 Fetching value of define "__RDRND__" : 1 00:01:48.047 Fetching value of define "__RDSEED__" : 1 00:01:48.047 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:48.047 Fetching value of define "__znver1__" : (undefined) 00:01:48.047 Fetching value of define "__znver2__" : (undefined) 00:01:48.047 Fetching value of define "__znver3__" : (undefined) 00:01:48.047 Fetching value of define "__znver4__" : (undefined) 00:01:48.047 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:48.047 Message: lib/log: Defining dependency "log" 00:01:48.047 Message: lib/kvargs: Defining dependency "kvargs" 00:01:48.047 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:48.047 Checking for function "getentropy" : NO 00:01:48.047 Message: lib/eal: Defining dependency "eal" 00:01:48.047 Message: lib/ring: Defining dependency "ring" 00:01:48.047 Message: lib/rcu: Defining dependency "rcu" 00:01:48.047 Message: lib/mempool: Defining dependency "mempool" 00:01:48.047 Message: lib/mbuf: Defining dependency "mbuf" 00:01:48.047 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.047 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:48.047 Compiler for C supports arguments -mpclmul: YES 00:01:48.047 Compiler for C supports arguments -maes: YES 00:01:48.047 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.047 Compiler for C supports arguments -mavx512bw: YES 00:01:48.047 Compiler for C supports arguments -mavx512dq: YES 00:01:48.047 Compiler for C supports arguments -mavx512vl: YES 00:01:48.047 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:48.047 Compiler for C supports arguments -mavx2: YES 00:01:48.047 Compiler for C supports arguments -mavx: YES 00:01:48.047 Message: lib/net: Defining dependency "net" 00:01:48.047 Message: lib/meter: Defining dependency "meter" 00:01:48.047 Message: lib/ethdev: Defining dependency "ethdev" 00:01:48.047 Message: lib/pci: Defining dependency "pci" 00:01:48.047 Message: lib/cmdline: Defining dependency "cmdline" 00:01:48.047 Message: lib/metrics: Defining dependency "metrics" 00:01:48.047 Message: lib/hash: Defining dependency "hash" 00:01:48.047 Message: lib/timer: Defining dependency "timer" 00:01:48.047 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.047 Message: lib/acl: Defining dependency "acl" 00:01:48.047 Message: lib/bbdev: Defining dependency "bbdev" 00:01:48.047 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:48.047 Run-time dependency libelf found: YES 0.190 00:01:48.047 Message: lib/bpf: Defining dependency "bpf" 00:01:48.047 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:48.047 Message: lib/compressdev: Defining dependency "compressdev" 00:01:48.047 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:48.047 Message: lib/distributor: Defining dependency "distributor" 00:01:48.047 Message: lib/dmadev: Defining dependency "dmadev" 00:01:48.047 Message: lib/efd: Defining dependency "efd" 00:01:48.047 Message: lib/eventdev: Defining dependency "eventdev" 00:01:48.047 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:48.047 Message: lib/gpudev: Defining dependency "gpudev" 00:01:48.047 Message: lib/gro: Defining dependency "gro" 00:01:48.047 Message: lib/gso: Defining dependency "gso" 00:01:48.047 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:48.047 Message: lib/jobstats: Defining dependency "jobstats" 00:01:48.047 Message: lib/latencystats: Defining dependency "latencystats" 00:01:48.047 Message: lib/lpm: Defining dependency "lpm" 00:01:48.047 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:48.047 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:48.047 Message: lib/member: Defining dependency "member" 00:01:48.047 Message: lib/pcapng: Defining dependency "pcapng" 00:01:48.047 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:48.047 Message: lib/power: Defining dependency "power" 00:01:48.047 Message: lib/rawdev: Defining dependency "rawdev" 00:01:48.047 Message: lib/regexdev: Defining dependency "regexdev" 00:01:48.047 Message: lib/mldev: Defining dependency "mldev" 00:01:48.047 Message: lib/rib: Defining dependency "rib" 00:01:48.047 Message: lib/reorder: Defining dependency "reorder" 00:01:48.047 Message: lib/sched: Defining dependency "sched" 00:01:48.047 Message: lib/security: Defining dependency "security" 00:01:48.047 Message: lib/stack: Defining dependency "stack" 00:01:48.047 Has header "linux/userfaultfd.h" : YES 00:01:48.047 Has header "linux/vduse.h" : YES 00:01:48.047 Message: lib/vhost: Defining dependency "vhost" 00:01:48.047 Message: lib/ipsec: Defining dependency "ipsec" 00:01:48.047 Message: lib/pdcp: Defining dependency "pdcp" 00:01:48.047 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:48.047 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:48.047 Message: lib/fib: Defining dependency "fib" 00:01:48.047 Message: lib/port: Defining dependency "port" 00:01:48.047 Message: lib/pdump: Defining dependency "pdump" 00:01:48.047 Message: lib/table: Defining dependency "table" 00:01:48.047 Message: lib/pipeline: Defining dependency "pipeline" 00:01:48.047 Message: lib/graph: Defining dependency "graph" 00:01:48.047 Message: lib/node: Defining dependency "node" 00:01:48.047 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:49.424 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:49.424 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:49.424 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:49.424 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:49.424 Compiler for C supports arguments -Wno-unused-value: YES 00:01:49.424 Compiler for C supports arguments -Wno-format: YES 00:01:49.424 Compiler for C supports arguments -Wno-format-security: YES 00:01:49.424 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:49.424 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:49.424 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:49.424 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:49.424 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:49.424 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:49.424 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:49.424 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:49.424 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:49.424 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:49.425 Has header "sys/epoll.h" : YES 00:01:49.425 Program doxygen found: YES (/usr/bin/doxygen) 00:01:49.425 Configuring doxy-api-html.conf using configuration 00:01:49.425 Configuring doxy-api-man.conf using configuration 00:01:49.425 Program mandb found: YES (/usr/bin/mandb) 00:01:49.425 Program sphinx-build found: NO 00:01:49.425 Configuring rte_build_config.h using configuration 00:01:49.425 Message: 00:01:49.425 ================= 00:01:49.425 Applications Enabled 00:01:49.425 
================= 00:01:49.425 00:01:49.425 apps: 00:01:49.425 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:49.425 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:49.425 test-pmd, test-regex, test-sad, test-security-perf, 00:01:49.425 00:01:49.425 Message: 00:01:49.425 ================= 00:01:49.425 Libraries Enabled 00:01:49.425 ================= 00:01:49.425 00:01:49.425 libs: 00:01:49.425 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:49.425 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:49.425 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:49.425 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:49.425 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:49.425 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:49.425 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:49.425 00:01:49.425 00:01:49.425 Message: 00:01:49.425 =============== 00:01:49.425 Drivers Enabled 00:01:49.425 =============== 00:01:49.425 00:01:49.425 common: 00:01:49.425 00:01:49.425 bus: 00:01:49.425 pci, vdev, 00:01:49.425 mempool: 00:01:49.425 ring, 00:01:49.425 dma: 00:01:49.425 00:01:49.425 net: 00:01:49.425 i40e, 00:01:49.425 raw: 00:01:49.425 00:01:49.425 crypto: 00:01:49.425 00:01:49.425 compress: 00:01:49.425 00:01:49.425 regex: 00:01:49.425 00:01:49.425 ml: 00:01:49.425 00:01:49.425 vdpa: 00:01:49.425 00:01:49.425 event: 00:01:49.425 00:01:49.425 baseband: 00:01:49.425 00:01:49.425 gpu: 00:01:49.425 00:01:49.425 00:01:49.425 Message: 00:01:49.425 ================= 00:01:49.425 Content Skipped 00:01:49.425 ================= 00:01:49.425 00:01:49.425 apps: 00:01:49.425 00:01:49.425 libs: 00:01:49.425 00:01:49.425 drivers: 00:01:49.425 common/cpt: not in enabled drivers build config 00:01:49.425 common/dpaax: not in enabled drivers build config 00:01:49.425 common/iavf: not in enabled drivers build config 00:01:49.425 common/idpf: not in enabled drivers build config 00:01:49.425 common/mvep: not in enabled drivers build config 00:01:49.425 common/octeontx: not in enabled drivers build config 00:01:49.425 bus/auxiliary: not in enabled drivers build config 00:01:49.425 bus/cdx: not in enabled drivers build config 00:01:49.425 bus/dpaa: not in enabled drivers build config 00:01:49.425 bus/fslmc: not in enabled drivers build config 00:01:49.425 bus/ifpga: not in enabled drivers build config 00:01:49.425 bus/platform: not in enabled drivers build config 00:01:49.425 bus/vmbus: not in enabled drivers build config 00:01:49.425 common/cnxk: not in enabled drivers build config 00:01:49.425 common/mlx5: not in enabled drivers build config 00:01:49.425 common/nfp: not in enabled drivers build config 00:01:49.425 common/qat: not in enabled drivers build config 00:01:49.425 common/sfc_efx: not in enabled drivers build config 00:01:49.425 mempool/bucket: not in enabled drivers build config 00:01:49.425 mempool/cnxk: not in enabled drivers build config 00:01:49.425 mempool/dpaa: not in enabled drivers build config 00:01:49.425 mempool/dpaa2: not in enabled drivers build config 00:01:49.425 mempool/octeontx: not in enabled drivers build config 00:01:49.425 mempool/stack: not in enabled drivers build config 00:01:49.425 dma/cnxk: not in enabled drivers build config 00:01:49.425 dma/dpaa: not in enabled drivers build config 00:01:49.425 dma/dpaa2: not in enabled drivers build 
config 00:01:49.425 dma/hisilicon: not in enabled drivers build config 00:01:49.425 dma/idxd: not in enabled drivers build config 00:01:49.425 dma/ioat: not in enabled drivers build config 00:01:49.425 dma/skeleton: not in enabled drivers build config 00:01:49.425 net/af_packet: not in enabled drivers build config 00:01:49.425 net/af_xdp: not in enabled drivers build config 00:01:49.425 net/ark: not in enabled drivers build config 00:01:49.425 net/atlantic: not in enabled drivers build config 00:01:49.425 net/avp: not in enabled drivers build config 00:01:49.425 net/axgbe: not in enabled drivers build config 00:01:49.425 net/bnx2x: not in enabled drivers build config 00:01:49.425 net/bnxt: not in enabled drivers build config 00:01:49.425 net/bonding: not in enabled drivers build config 00:01:49.425 net/cnxk: not in enabled drivers build config 00:01:49.425 net/cpfl: not in enabled drivers build config 00:01:49.425 net/cxgbe: not in enabled drivers build config 00:01:49.425 net/dpaa: not in enabled drivers build config 00:01:49.425 net/dpaa2: not in enabled drivers build config 00:01:49.425 net/e1000: not in enabled drivers build config 00:01:49.425 net/ena: not in enabled drivers build config 00:01:49.425 net/enetc: not in enabled drivers build config 00:01:49.425 net/enetfec: not in enabled drivers build config 00:01:49.425 net/enic: not in enabled drivers build config 00:01:49.425 net/failsafe: not in enabled drivers build config 00:01:49.425 net/fm10k: not in enabled drivers build config 00:01:49.425 net/gve: not in enabled drivers build config 00:01:49.425 net/hinic: not in enabled drivers build config 00:01:49.425 net/hns3: not in enabled drivers build config 00:01:49.425 net/iavf: not in enabled drivers build config 00:01:49.425 net/ice: not in enabled drivers build config 00:01:49.425 net/idpf: not in enabled drivers build config 00:01:49.425 net/igc: not in enabled drivers build config 00:01:49.425 net/ionic: not in enabled drivers build config 00:01:49.425 net/ipn3ke: not in enabled drivers build config 00:01:49.425 net/ixgbe: not in enabled drivers build config 00:01:49.425 net/mana: not in enabled drivers build config 00:01:49.425 net/memif: not in enabled drivers build config 00:01:49.425 net/mlx4: not in enabled drivers build config 00:01:49.425 net/mlx5: not in enabled drivers build config 00:01:49.425 net/mvneta: not in enabled drivers build config 00:01:49.425 net/mvpp2: not in enabled drivers build config 00:01:49.425 net/netvsc: not in enabled drivers build config 00:01:49.425 net/nfb: not in enabled drivers build config 00:01:49.425 net/nfp: not in enabled drivers build config 00:01:49.425 net/ngbe: not in enabled drivers build config 00:01:49.425 net/null: not in enabled drivers build config 00:01:49.425 net/octeontx: not in enabled drivers build config 00:01:49.425 net/octeon_ep: not in enabled drivers build config 00:01:49.425 net/pcap: not in enabled drivers build config 00:01:49.425 net/pfe: not in enabled drivers build config 00:01:49.425 net/qede: not in enabled drivers build config 00:01:49.425 net/ring: not in enabled drivers build config 00:01:49.425 net/sfc: not in enabled drivers build config 00:01:49.425 net/softnic: not in enabled drivers build config 00:01:49.425 net/tap: not in enabled drivers build config 00:01:49.425 net/thunderx: not in enabled drivers build config 00:01:49.425 net/txgbe: not in enabled drivers build config 00:01:49.425 net/vdev_netvsc: not in enabled drivers build config 00:01:49.425 net/vhost: not in enabled drivers build config 
00:01:49.425 net/virtio: not in enabled drivers build config 00:01:49.425 net/vmxnet3: not in enabled drivers build config 00:01:49.425 raw/cnxk_bphy: not in enabled drivers build config 00:01:49.425 raw/cnxk_gpio: not in enabled drivers build config 00:01:49.425 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:49.425 raw/ifpga: not in enabled drivers build config 00:01:49.425 raw/ntb: not in enabled drivers build config 00:01:49.425 raw/skeleton: not in enabled drivers build config 00:01:49.425 crypto/armv8: not in enabled drivers build config 00:01:49.425 crypto/bcmfs: not in enabled drivers build config 00:01:49.425 crypto/caam_jr: not in enabled drivers build config 00:01:49.425 crypto/ccp: not in enabled drivers build config 00:01:49.425 crypto/cnxk: not in enabled drivers build config 00:01:49.425 crypto/dpaa_sec: not in enabled drivers build config 00:01:49.425 crypto/dpaa2_sec: not in enabled drivers build config 00:01:49.425 crypto/ipsec_mb: not in enabled drivers build config 00:01:49.425 crypto/mlx5: not in enabled drivers build config 00:01:49.425 crypto/mvsam: not in enabled drivers build config 00:01:49.425 crypto/nitrox: not in enabled drivers build config 00:01:49.425 crypto/null: not in enabled drivers build config 00:01:49.425 crypto/octeontx: not in enabled drivers build config 00:01:49.425 crypto/openssl: not in enabled drivers build config 00:01:49.425 crypto/scheduler: not in enabled drivers build config 00:01:49.425 crypto/uadk: not in enabled drivers build config 00:01:49.425 crypto/virtio: not in enabled drivers build config 00:01:49.425 compress/isal: not in enabled drivers build config 00:01:49.425 compress/mlx5: not in enabled drivers build config 00:01:49.425 compress/octeontx: not in enabled drivers build config 00:01:49.425 compress/zlib: not in enabled drivers build config 00:01:49.425 regex/mlx5: not in enabled drivers build config 00:01:49.425 regex/cn9k: not in enabled drivers build config 00:01:49.425 ml/cnxk: not in enabled drivers build config 00:01:49.425 vdpa/ifc: not in enabled drivers build config 00:01:49.425 vdpa/mlx5: not in enabled drivers build config 00:01:49.425 vdpa/nfp: not in enabled drivers build config 00:01:49.425 vdpa/sfc: not in enabled drivers build config 00:01:49.425 event/cnxk: not in enabled drivers build config 00:01:49.425 event/dlb2: not in enabled drivers build config 00:01:49.425 event/dpaa: not in enabled drivers build config 00:01:49.425 event/dpaa2: not in enabled drivers build config 00:01:49.425 event/dsw: not in enabled drivers build config 00:01:49.425 event/opdl: not in enabled drivers build config 00:01:49.425 event/skeleton: not in enabled drivers build config 00:01:49.425 event/sw: not in enabled drivers build config 00:01:49.425 event/octeontx: not in enabled drivers build config 00:01:49.425 baseband/acc: not in enabled drivers build config 00:01:49.425 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:49.425 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:49.425 baseband/la12xx: not in enabled drivers build config 00:01:49.425 baseband/null: not in enabled drivers build config 00:01:49.425 baseband/turbo_sw: not in enabled drivers build config 00:01:49.425 gpu/cuda: not in enabled drivers build config 00:01:49.425 00:01:49.425 00:01:49.425 Build targets in project: 217 00:01:49.425 00:01:49.425 DPDK 23.11.0 00:01:49.425 00:01:49.425 User defined options 00:01:49.425 libdir : lib 00:01:49.425 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:49.425 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:49.425 c_link_args : 00:01:49.425 enable_docs : false 00:01:49.425 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:49.425 enable_kmods : false 00:01:49.425 machine : native 00:01:49.425 tests : false 00:01:49.425 00:01:49.425 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.425 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:49.425 11:51:39 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:49.425 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:49.686 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:49.686 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:49.686 [3/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.686 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:49.686 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.686 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:49.686 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:49.686 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.686 [9/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:49.686 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:49.686 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:49.686 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:49.686 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:49.686 [14/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.946 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:49.946 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:49.946 [17/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:49.946 [18/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.946 [19/707] Linking static target lib/librte_kvargs.a 00:01:49.946 [20/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.946 [21/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:49.946 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.946 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.946 [24/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:49.946 [25/707] Linking static target lib/librte_pci.a 00:01:49.946 [26/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:49.946 [27/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:49.946 [28/707] Linking static target lib/librte_log.a 00:01:49.946 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:49.946 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:49.946 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:49.946 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:49.946 [33/707] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:49.946 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:49.946 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.209 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.209 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:50.209 [38/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.209 [39/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.209 [40/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:50.209 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:50.209 [42/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:50.209 [43/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:50.209 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:50.209 [45/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:50.209 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:50.209 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:50.209 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:50.209 [49/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:50.209 [50/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:50.209 [51/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:50.209 [52/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:50.209 [53/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.209 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:50.467 [55/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:50.467 [56/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:50.467 [57/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:50.467 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:50.467 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.467 [60/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:50.467 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:50.467 [62/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:50.467 [63/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:50.467 [64/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:50.467 [65/707] Linking static target lib/librte_meter.a 00:01:50.467 [66/707] Linking static target lib/librte_ring.a 00:01:50.467 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:50.467 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:50.467 [69/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.467 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.467 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.467 [72/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:50.467 [73/707] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:50.467 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.467 [75/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.467 [76/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:50.467 [77/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:50.467 [78/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.467 [79/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.467 [80/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.467 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.467 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.467 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.467 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.467 [85/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.467 [86/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:50.467 [87/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:50.467 [88/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:50.467 [89/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.467 [90/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:50.467 [91/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.467 [92/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:50.467 [93/707] Linking static target lib/librte_cmdline.a 00:01:50.467 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.467 [95/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.467 [96/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:50.468 [97/707] Linking static target lib/librte_net.a 00:01:50.468 [98/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.468 [99/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:50.468 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.468 [101/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.732 [102/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:50.732 [103/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:50.732 [104/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:50.732 [105/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:50.732 [106/707] Linking static target lib/librte_metrics.a 00:01:50.732 [107/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:50.732 [108/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.732 [109/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.732 [110/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:50.732 [111/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.732 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.732 [113/707] Linking target lib/librte_log.so.24.0 
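Note on the build being logged here: the ninja run above is driven by the user-defined options in the configure summary — only the bus/pci, bus/vdev, mempool/ring and net/i40e drivers are enabled, which is why every other driver is reported as "not in enabled drivers build config". A minimal sketch of an equivalent standalone configure-and-build sequence, written with the `meson setup` form that the deprecation warning above recommends, might look as follows; the option values mirror the logged summary, the build directory name and -j96 job count come from the logged ninja invocation, and any additional flags passed by the autobuild wrapper script are assumptions not reproduced here.

    # sketch only: run from the dpdk source checkout; values mirror the logged configure summary
    meson setup build-tmp \
        -Dlibdir=lib \
        -Dprefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # -j96 matches the job count used by the logged build
    ninja -C build-tmp -j96
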
00:01:50.732 [114/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.732 [115/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:50.732 [116/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.732 [117/707] Linking static target lib/librte_cfgfile.a 00:01:50.732 [118/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.732 [119/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:50.732 [120/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:50.732 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:50.732 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.732 [123/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.001 [124/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:51.001 [125/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.001 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:51.001 [127/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.001 [128/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.001 [129/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:51.001 [130/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:51.001 [131/707] Linking target lib/librte_kvargs.so.24.0 00:01:51.001 [132/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:51.001 [133/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.001 [134/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:51.001 [135/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:51.001 [136/707] Linking static target lib/librte_bitratestats.a 00:01:51.001 [137/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.001 [138/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:51.001 [139/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:51.001 [140/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:51.001 [141/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.001 [142/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.001 [143/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.001 [144/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:51.001 [145/707] Linking static target lib/librte_mempool.a 00:01:51.001 [146/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.001 [147/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:51.001 [148/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:51.001 [149/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:51.001 [150/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.001 [151/707] Linking static target lib/librte_timer.a 00:01:51.001 [152/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.263 [153/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:51.263 [154/707] Generating lib/metrics.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:51.263 [155/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:51.263 [156/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:51.263 [157/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:51.263 [158/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:51.263 [159/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:51.263 [160/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:51.263 [161/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:51.263 [162/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:51.263 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:51.263 [164/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.263 [165/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:51.263 [166/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:51.263 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:51.263 [168/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.263 [169/707] Linking static target lib/librte_compressdev.a 00:01:51.263 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:51.263 [171/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:51.263 [172/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:51.263 [173/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:51.263 [174/707] Linking static target lib/librte_jobstats.a 00:01:51.263 [175/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:51.263 [176/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:51.263 [177/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:51.263 [178/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:51.263 [179/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:51.263 [180/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:51.263 [181/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.263 [182/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.263 [183/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.526 [184/707] Linking static target lib/librte_eal.a 00:01:51.526 [185/707] Linking static target lib/librte_rcu.a 00:01:51.526 [186/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:51.526 [187/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.526 [188/707] Linking static target lib/librte_dispatcher.a 00:01:51.526 [189/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:51.526 [190/707] Linking static target lib/librte_bbdev.a 00:01:51.526 [191/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.526 [192/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:51.526 [193/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:51.526 [194/707] Linking static target lib/librte_telemetry.a 00:01:51.526 [195/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:51.526 [196/707] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:51.526 [197/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:51.526 [198/707] Linking static target lib/librte_gro.a 00:01:51.526 [199/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:51.526 [200/707] Linking static target lib/librte_gpudev.a 00:01:51.526 [201/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:51.526 [202/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:51.526 [203/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:51.526 [204/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:51.526 [205/707] Linking static target lib/librte_latencystats.a 00:01:51.526 [206/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:51.526 [207/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:51.526 [208/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:51.526 [209/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:51.526 [210/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:51.526 [211/707] Linking static target lib/librte_gso.a 00:01:51.526 [212/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:51.526 [213/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:51.526 [214/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:51.526 [215/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:51.526 [216/707] Linking static target lib/librte_dmadev.a 00:01:51.526 [217/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:51.526 [218/707] Linking static target lib/librte_distributor.a 00:01:51.526 [219/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.526 [220/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:51.526 [221/707] Linking static target lib/librte_mbuf.a 00:01:51.526 [222/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.526 [223/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [224/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:51.787 [225/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:51.787 [226/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:51.787 [227/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:51.787 [228/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:51.787 [229/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:51.787 [231/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:51.787 [232/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:51.787 [233/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:51.787 [234/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:51.787 [235/707] Linking static target lib/librte_ip_frag.a 00:01:51.787 [236/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:51.787 [237/707] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:51.787 [238/707] Linking static target lib/librte_stack.a 00:01:51.787 [239/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:51.787 [240/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [241/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:51.787 [242/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:51.787 [243/707] Linking static target lib/librte_regexdev.a 00:01:51.787 [244/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:51.787 [246/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [247/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:51.787 [248/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [249/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.787 [250/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:51.787 [251/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:52.046 [252/707] Linking static target lib/librte_mldev.a 00:01:52.046 [253/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:52.046 [254/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:52.046 [255/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.046 [256/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:52.046 [257/707] Linking static target lib/librte_rawdev.a 00:01:52.046 [258/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:52.046 [259/707] Linking static target lib/librte_pcapng.a 00:01:52.046 [260/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [261/707] Linking static target lib/librte_power.a 00:01:52.046 [262/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:52.046 [263/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [264/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.046 [265/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:52.046 [266/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [267/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:52.046 [268/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:52.046 [269/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [270/707] Linking static target lib/librte_bpf.a 00:01:52.046 [271/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:52.046 [272/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:52.046 [273/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [274/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.046 [275/707] Linking target lib/librte_telemetry.so.24.0 00:01:52.046 [276/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.046 [277/707] Linking static 
target lib/librte_reorder.a 00:01:52.046 [278/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.046 [279/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:52.306 [280/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.306 [281/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.306 [282/707] Linking static target lib/librte_security.a 00:01:52.306 [283/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.306 [284/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:52.306 [285/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:52.306 [286/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.306 [287/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:52.306 [288/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:52.306 [289/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:52.306 [290/707] Linking static target lib/librte_lpm.a 00:01:52.306 [291/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.306 [292/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:52.306 [293/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.306 [294/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.306 [295/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:52.306 [296/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:52.306 [297/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:52.567 [298/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.567 [299/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.567 [300/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:52.567 [301/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:52.567 [302/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:52.567 [303/707] Linking static target lib/librte_rib.a 00:01:52.567 [304/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:52.567 [305/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:52.567 [306/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.567 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:52.567 [308/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:52.567 [309/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:52.567 [310/707] Linking static target lib/librte_efd.a 00:01:52.567 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:52.567 [312/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.567 [313/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.567 [314/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:52.567 [315/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:52.567 [316/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.567 [317/707] Compiling C 
object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:52.567 [318/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:52.832 [319/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:52.832 [320/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:52.832 [321/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.832 [322/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.832 [323/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:52.832 [324/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.832 [325/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:52.832 [326/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:52.832 [327/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:52.832 [328/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:52.832 [329/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:52.832 [330/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:52.832 [331/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.832 [332/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:52.832 [333/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:52.832 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:52.833 [335/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:52.833 [336/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:52.833 [337/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.833 [338/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.833 [339/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:52.833 [340/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:52.833 [341/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.091 [342/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.091 [343/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:53.091 [344/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:53.091 [345/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:53.091 [346/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:53.091 [347/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:53.091 [348/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:53.091 [349/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:53.091 [350/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:53.091 [351/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:53.091 [352/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:53.091 [353/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:53.091 [354/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:53.091 [355/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:53.091 [356/707] Compiling C object 
lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:53.091 [357/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:53.091 [358/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.091 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:53.091 [360/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:53.091 [361/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:53.360 [362/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:53.360 [363/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:53.360 [364/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:53.360 [365/707] Linking static target lib/librte_fib.a 00:01:53.360 [366/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:53.360 [367/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.360 [368/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:53.360 [369/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:53.360 [370/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:53.360 [371/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:53.360 [372/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:53.360 [373/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:53.360 [374/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:53.360 [375/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:53.360 [376/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.360 [377/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:53.625 [378/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:53.625 [379/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:53.625 [380/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:53.625 [381/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:53.625 [382/707] Linking static target lib/librte_pdump.a 00:01:53.625 [383/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:53.625 [384/707] Linking static target lib/librte_graph.a 00:01:53.625 [385/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:53.625 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:53.625 [387/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:53.625 [388/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:53.625 [389/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:53.625 [390/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:53.625 [391/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:53.625 [392/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:53.625 [393/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:53.625 [394/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:53.625 [395/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:53.625 [396/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:53.625 [397/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:53.625 [398/707] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.625 [399/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:53.885 [400/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.885 [401/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:53.885 [402/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:53.885 [403/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.885 [404/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.885 [405/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:53.885 [406/707] Linking static target lib/librte_cryptodev.a 00:01:53.885 [407/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:53.885 [408/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.885 [409/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:53.885 [410/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:53.885 [411/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.885 [412/707] Linking static target lib/librte_table.a 00:01:53.885 [413/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.885 [414/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.885 [415/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:53.885 [416/707] Linking static target drivers/librte_bus_vdev.a 00:01:53.885 [417/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:53.885 [418/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:53.885 [419/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:53.885 [420/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:53.885 [421/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.885 [422/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:53.885 [423/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:53.885 [424/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:53.885 [425/707] Linking static target lib/librte_sched.a 00:01:53.885 [426/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:54.150 [427/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:54.150 [428/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:54.150 [429/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:54.150 [430/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:54.150 [431/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.150 [432/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:54.150 [433/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:54.150 [434/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:54.150 [435/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.150 [436/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:54.150 
[437/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:54.150 [438/707] Linking static target drivers/librte_bus_pci.a 00:01:54.150 [439/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:54.150 [440/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:54.150 [441/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:54.150 [442/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:54.150 [443/707] Linking static target lib/librte_ipsec.a 00:01:54.151 [444/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:54.151 [445/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:54.151 [446/707] Linking static target lib/librte_member.a 00:01:54.151 [447/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:54.151 [448/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.413 [449/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:54.413 [450/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:54.413 [451/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:54.413 [452/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:54.413 [453/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:54.413 [454/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:54.413 [455/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.413 [456/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:54.413 [457/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:54.413 [458/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.413 [459/707] Linking static target lib/librte_hash.a 00:01:54.413 [460/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:54.413 [461/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.413 [462/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:54.413 [463/707] Linking static target lib/librte_node.a 00:01:54.413 [464/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:54.413 [465/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:54.413 [466/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:54.413 [467/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:54.413 [468/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:54.673 [469/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.673 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:54.673 [471/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:54.673 [472/707] Linking static target lib/librte_pdcp.a 00:01:54.673 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:54.673 [474/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:54.673 [475/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:01:54.673 [476/707] Linking static target lib/acl/libavx2_tmp.a 00:01:54.673 [477/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:54.673 [478/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:54.673 [479/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:54.673 [480/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.673 [481/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:54.673 [482/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.673 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:54.673 [484/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:54.673 [485/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:54.673 [486/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:54.673 [487/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:54.673 [488/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.673 [489/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.673 [490/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:54.673 [491/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:54.673 [492/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:54.673 [493/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:54.673 [494/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.673 [495/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:54.673 [496/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:54.673 [497/707] Linking static target drivers/librte_mempool_ring.a 00:01:54.673 [498/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.673 [499/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.931 [500/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:54.931 [501/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:54.931 [502/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:54.931 [503/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:54.931 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:54.931 [505/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:54.931 [506/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:54.931 [507/707] Linking static target lib/librte_port.a 00:01:54.931 [508/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.931 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:54.931 [510/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:54.931 [511/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:54.931 
[512/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.931 [513/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:54.931 [514/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:54.931 [515/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:54.931 [516/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:54.931 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:54.931 [518/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:54.931 [519/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.931 [520/707] Linking static target lib/librte_eventdev.a 00:01:54.931 [521/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:54.931 [522/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:54.931 [523/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:54.931 [524/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:54.931 [525/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:55.190 [526/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:55.190 [527/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.190 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:55.190 [529/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:55.190 [530/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:55.190 [531/707] Linking static target lib/librte_acl.a 00:01:55.190 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:55.190 [533/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:55.190 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:55.190 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:55.190 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:55.190 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:55.190 [538/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:55.190 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:55.190 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:55.190 [541/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:55.190 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:55.190 [543/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:55.190 [544/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:55.190 [545/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:55.448 [546/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:55.448 [547/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:55.448 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.448 [549/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.448 [550/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:55.448 [551/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.448 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:55.448 [553/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:55.448 [554/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:55.448 [555/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.448 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:55.448 [557/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:55.448 [558/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:55.448 [559/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:55.448 [560/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:55.448 [561/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:55.448 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:55.448 [563/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:55.705 [564/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:55.705 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:55.705 [566/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:55.705 [567/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:55.705 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:55.705 [569/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:55.963 [570/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.963 [571/707] Linking static target lib/librte_ethdev.a 00:01:55.963 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:55.963 [573/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:55.963 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:56.221 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:56.786 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:56.786 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:57.044 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:57.044 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:57.302 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:57.910 [581/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:57.910 [582/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.910 [583/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:57.910 [584/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:58.191 [585/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.191 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:58.191 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.191 [588/707] Compiling C object 
drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:58.191 [589/707] Linking static target drivers/librte_net_i40e.a 00:01:59.127 [590/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:59.127 [591/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.694 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:01.073 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.073 [594/707] Linking target lib/librte_eal.so.24.0 00:02:01.073 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:01.336 [596/707] Linking target lib/librte_ring.so.24.0 00:02:01.336 [597/707] Linking target lib/librte_timer.so.24.0 00:02:01.336 [598/707] Linking target lib/librte_jobstats.so.24.0 00:02:01.336 [599/707] Linking target lib/librte_dmadev.so.24.0 00:02:01.336 [600/707] Linking target lib/librte_meter.so.24.0 00:02:01.336 [601/707] Linking target lib/librte_pci.so.24.0 00:02:01.336 [602/707] Linking target lib/librte_stack.so.24.0 00:02:01.336 [603/707] Linking target lib/librte_rawdev.so.24.0 00:02:01.336 [604/707] Linking target lib/librte_cfgfile.so.24.0 00:02:01.336 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:01.336 [606/707] Linking target lib/librte_acl.so.24.0 00:02:01.336 [607/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:01.336 [608/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:01.336 [609/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:01.336 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:01.336 [611/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:01.336 [612/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:01.336 [613/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:01.336 [614/707] Linking target lib/librte_rcu.so.24.0 00:02:01.336 [615/707] Linking target lib/librte_mempool.so.24.0 00:02:01.336 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:01.594 [617/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:01.594 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:01.594 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:01.594 [620/707] Linking target lib/librte_rib.so.24.0 00:02:01.594 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:01.594 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:01.852 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:01.852 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:01.852 [625/707] Linking target lib/librte_bbdev.so.24.0 00:02:01.852 [626/707] Linking target lib/librte_fib.so.24.0 00:02:01.852 [627/707] Linking target lib/librte_net.so.24.0 00:02:01.852 [628/707] Linking target lib/librte_distributor.so.24.0 00:02:01.852 [629/707] Linking target lib/librte_regexdev.so.24.0 00:02:01.852 [630/707] Linking target lib/librte_gpudev.so.24.0 00:02:01.853 [631/707] Linking target lib/librte_sched.so.24.0 00:02:01.853 [632/707] Linking target 
lib/librte_compressdev.so.24.0 00:02:01.853 [633/707] Linking target lib/librte_reorder.so.24.0 00:02:01.853 [634/707] Linking target lib/librte_mldev.so.24.0 00:02:01.853 [635/707] Linking target lib/librte_cryptodev.so.24.0 00:02:01.853 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:01.853 [637/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:01.853 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:01.853 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:01.853 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:01.853 [641/707] Linking target lib/librte_hash.so.24.0 00:02:02.111 [642/707] Linking target lib/librte_security.so.24.0 00:02:02.111 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:02.111 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:02.111 [645/707] Linking target lib/librte_efd.so.24.0 00:02:02.111 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:02.111 [647/707] Linking target lib/librte_member.so.24.0 00:02:02.111 [648/707] Linking target lib/librte_pdcp.so.24.0 00:02:02.111 [649/707] Linking target lib/librte_ipsec.so.24.0 00:02:02.370 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:02.370 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:03.307 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.307 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:03.307 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:03.307 [655/707] Linking target lib/librte_gso.so.24.0 00:02:03.307 [656/707] Linking target lib/librte_metrics.so.24.0 00:02:03.307 [657/707] Linking target lib/librte_gro.so.24.0 00:02:03.307 [658/707] Linking target lib/librte_pcapng.so.24.0 00:02:03.307 [659/707] Linking target lib/librte_bpf.so.24.0 00:02:03.307 [660/707] Linking target lib/librte_power.so.24.0 00:02:03.307 [661/707] Linking target lib/librte_ip_frag.so.24.0 00:02:03.307 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:03.307 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:03.565 [664/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:03.565 [665/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:03.565 [666/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:03.565 [667/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:03.565 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:03.565 [669/707] Linking target lib/librte_graph.so.24.0 00:02:03.565 [670/707] Linking target lib/librte_bitratestats.so.24.0 00:02:03.565 [671/707] Linking target lib/librte_latencystats.so.24.0 00:02:03.565 [672/707] Linking target lib/librte_dispatcher.so.24.0 00:02:03.565 [673/707] Linking target lib/librte_pdump.so.24.0 00:02:03.565 [674/707] Linking target lib/librte_port.so.24.0 00:02:03.565 [675/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:03.565 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:03.822 [677/707] 
Linking target lib/librte_node.so.24.0 00:02:03.822 [678/707] Linking target lib/librte_table.so.24.0 00:02:03.822 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:06.378 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:06.379 [681/707] Linking static target lib/librte_pipeline.a 00:02:06.379 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.379 [683/707] Linking static target lib/librte_vhost.a 00:02:06.637 [684/707] Linking target app/dpdk-proc-info 00:02:06.637 [685/707] Linking target app/dpdk-test-acl 00:02:06.637 [686/707] Linking target app/dpdk-pdump 00:02:06.637 [687/707] Linking target app/dpdk-test-flow-perf 00:02:06.637 [688/707] Linking target app/dpdk-dumpcap 00:02:06.637 [689/707] Linking target app/dpdk-test-compress-perf 00:02:06.637 [690/707] Linking target app/dpdk-test-crypto-perf 00:02:06.637 [691/707] Linking target app/dpdk-test-gpudev 00:02:06.637 [692/707] Linking target app/dpdk-test-sad 00:02:06.637 [693/707] Linking target app/dpdk-test-security-perf 00:02:06.637 [694/707] Linking target app/dpdk-test-cmdline 00:02:06.637 [695/707] Linking target app/dpdk-test-eventdev 00:02:06.637 [696/707] Linking target app/dpdk-graph 00:02:06.637 [697/707] Linking target app/dpdk-test-fib 00:02:06.637 [698/707] Linking target app/dpdk-test-regex 00:02:06.637 [699/707] Linking target app/dpdk-test-dma-perf 00:02:06.637 [700/707] Linking target app/dpdk-test-mldev 00:02:06.637 [701/707] Linking target app/dpdk-test-pipeline 00:02:06.637 [702/707] Linking target app/dpdk-test-bbdev 00:02:06.637 [703/707] Linking target app/dpdk-testpmd 00:02:08.010 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.010 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:11.305 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.305 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:11.305 11:52:00 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:11.305 11:52:00 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:11.305 11:52:00 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:11.305 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:11.305 [0/1] Installing files. 
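The install step that starts here copies the build output — including the examples tree listed below — under the prefix configured earlier (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build). Meson-generated install targets also honor the DESTDIR environment variable, so the same tree could be staged into a separate directory instead; the staging path in the sketch below is a hypothetical example rather than anything this run does.

    # assumption: /tmp/dpdk-stage is a hypothetical staging directory, not used by the autotest run
    DESTDIR=/tmp/dpdk-stage ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp install
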
00:02:11.305 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.307 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:11.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:11.310 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:11.310 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.310 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:11.310 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.310 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.311 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.574 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.574 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.574 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.574 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.574 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.575 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.575 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.575 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.575 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:11.575 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.575 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.576 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.577 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.578 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:11.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.579 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:11.579 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:11.579 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:11.579 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:11.579 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:11.579 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:11.579 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:11.579 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:11.579 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:11.579 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:11.579 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:11.579 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:11.579 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:11.579 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:11.579 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:11.579 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:11.579 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:11.579 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:11.579 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:11.579 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:11.579 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:11.579 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:11.579 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:11.579 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:11.579 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:11.579 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:11.579 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:11.579 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:11.579 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:11.579 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:11.579 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:11.579 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:11.579 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:11.579 
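The pkg-config files (libdpdk.pc, libdpdk-libs.pc) installed just above and the two-level symlink chain being laid down here (librte_*.so -> librte_*.so.24 -> librte_*.so.24.0) are what let a consumer link against this private DPDK tree without touching system paths: the unversioned name is resolved at link time, while the versioned name matches the SONAME the runtime loader looks up. A minimal consumer build against this tree could look roughly like the sketch below; my_app.c is a hypothetical source file, and the two environment variables simply point at the directories shown in this log:
$ export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
$ pkg-config --modversion libdpdk    # confirm the private install is the one being picked up
$ cc -O2 my_app.c -o my_app $(pkg-config --cflags libdpdk) $(pkg-config --libs libdpdk)
$ LD_LIBRARY_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib ./my_app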
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:11.579 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:11.579 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:11.579 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:11.579 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:11.579 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:11.579 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:11.579 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:11.579 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:11.579 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:11.579 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:11.579 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:11.579 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:11.579 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:11.579 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:11.579 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:11.579 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:11.579 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:11.579 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:11.579 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:11.579 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:11.579 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:11.579 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:11.579 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:11.579 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:11.579 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:11.579 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:11.579 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:11.579 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:11.579 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:11.579 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:11.579 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:11.579 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:11.579 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:11.579 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:11.579 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:11.579 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:11.579 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:11.579 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:11.579 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:11.579 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:11.580 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:11.580 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:11.580 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:11.580 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:11.580 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:11.580 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:11.580 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:11.580 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:11.580 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:11.580 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:11.580 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:11.580 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:11.580 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:11.580 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:11.580 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:11.580 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:11.580 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:11.580 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:11.580 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:11.580 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:11.580 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:11.580 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:11.580 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:11.580 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:11.580 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:11.580 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:11.580 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:11.580 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:11.580 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:11.580 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:11.580 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:11.580 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:11.580 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:11.580 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:11.580 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:11.580 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:11.580 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:11.580 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:11.580 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:11.580 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:11.580 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:11.580 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:11.580 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:11.580 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:11.580 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:11.580 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:11.580 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:11.580 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:11.580 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:11.580 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:11.580 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:11.580 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:11.580 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:11.580 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:11.580 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:11.580 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:11.580 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:11.580 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:11.580 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:11.580 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:11.580 11:52:01 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:11.580 11:52:01 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.580 00:02:11.580 real 0m27.756s 00:02:11.580 user 8m27.865s 00:02:11.580 sys 1m57.270s 00:02:11.580 11:52:01 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.580 11:52:01 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:11.580 ************************************ 00:02:11.580 END TEST build_native_dpdk 00:02:11.580 ************************************ 00:02:11.580 11:52:01 -- common/autotest_common.sh@1142 -- $ return 0 00:02:11.580 11:52:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.580 11:52:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.580 11:52:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:11.580 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:11.839 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:11.839 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:11.839 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:12.407 Using 'verbs' RDMA provider 00:02:25.220 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:37.481 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:37.481 Creating mk/config.mk...done. 00:02:37.481 Creating mk/cc.flags.mk...done. 00:02:37.481 Type 'make' to build. 00:02:37.481 11:52:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:37.481 11:52:26 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:37.481 11:52:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:37.481 11:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.481 ************************************ 00:02:37.481 START TEST make 00:02:37.481 ************************************ 00:02:37.481 11:52:26 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:37.481 make[1]: Nothing to be done for 'all'. 
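The './librte_*.so*' -> 'dpdk/pmds-24.0/...' lines above are the work of symlink-drivers-solibs.sh, which collects the driver libraries built in this run (bus_pci, bus_vdev, mempool_ring, net_i40e) into the dpdk/pmds-24.0 plugin directory; a shared-library DPDK build normally loads PMDs from that baked-in path at startup, and extra drivers can be pulled in explicitly with the EAL -d option. A hedged illustration follows, where my_dpdk_app stands in for any EAL-based binary and is not part of this build:
$ ls /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
$ # -d accepts a single driver .so or a whole directory of them
$ ./my_dpdk_app -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 -- ...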
00:02:38.424 The Meson build system 00:02:38.424 Version: 1.3.1 00:02:38.424 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:38.424 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:38.424 Build type: native build 00:02:38.424 Project name: libvfio-user 00:02:38.424 Project version: 0.0.1 00:02:38.424 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:38.424 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:38.424 Host machine cpu family: x86_64 00:02:38.424 Host machine cpu: x86_64 00:02:38.424 Run-time dependency threads found: YES 00:02:38.424 Library dl found: YES 00:02:38.424 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:38.424 Run-time dependency json-c found: YES 0.17 00:02:38.424 Run-time dependency cmocka found: YES 1.1.7 00:02:38.424 Program pytest-3 found: NO 00:02:38.424 Program flake8 found: NO 00:02:38.424 Program misspell-fixer found: NO 00:02:38.424 Program restructuredtext-lint found: NO 00:02:38.424 Program valgrind found: YES (/usr/bin/valgrind) 00:02:38.424 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.424 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.424 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.424 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:38.424 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:38.424 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:38.424 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
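Meson's configure stage for libvfio-user probes the toolchain and optional dependencies: json-c 0.17, cmocka 1.1.7 and valgrind are present on this builder, while pytest-3, flake8, misspell-fixer and restructuredtext-lint are not, so those optional test/lint targets are left out. Meson typically resolves the run-time dependencies through pkg-config, so roughly the same checks can be reproduced by hand (an approximation, not what meson literally executes):
$ pkg-config --modversion json-c      # 0.17 on this builder
$ pkg-config --modversion cmocka      # 1.1.7 on this builder
$ command -v valgrind pytest-3 flake8 # only valgrind resolves here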
00:02:38.424 Build targets in project: 8 00:02:38.424 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:38.424 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:38.424 00:02:38.424 libvfio-user 0.0.1 00:02:38.424 00:02:38.424 User defined options 00:02:38.424 buildtype : debug 00:02:38.424 default_library: shared 00:02:38.424 libdir : /usr/local/lib 00:02:38.424 00:02:38.424 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.988 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:38.988 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:38.988 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:38.988 [3/37] Compiling C object samples/null.p/null.c.o 00:02:38.989 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:38.989 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:38.989 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:38.989 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:38.989 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:38.989 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:38.989 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:38.989 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:38.989 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:38.989 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:38.989 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:38.989 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:38.989 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:38.989 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:38.989 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:38.989 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:38.989 [20/37] Compiling C object samples/client.p/client.c.o 00:02:38.989 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:38.989 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:38.989 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:38.989 [24/37] Compiling C object samples/server.p/server.c.o 00:02:38.989 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:38.989 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:39.246 [27/37] Linking target samples/client 00:02:39.246 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:39.246 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:39.246 [30/37] Linking target test/unit_tests 00:02:39.246 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:39.504 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:39.504 [33/37] Linking target samples/server 00:02:39.504 [34/37] Linking target samples/lspci 00:02:39.504 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:39.504 [36/37] Linking target samples/gpio-pci-idio-16 00:02:39.504 [37/37] Linking target samples/null 00:02:39.504 INFO: autodetecting backend as ninja 00:02:39.505 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
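The summary above records the libvfio-user configuration this run uses: 8 build targets, buildtype debug, shared default_library, libdir /usr/local/lib, with ninja as the backend. Reproduced by hand, the configure/build/stage sequence would look roughly like the sketch below; the real invocation is driven by SPDK's build scripts, so treat this as an approximation with the paths copied from the log:
$ SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$ meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
      --buildtype=debug --default-library=shared --libdir=/usr/local/lib
$ ninja -C "$SPDK/build/libvfio-user/build-debug"
$ DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"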
00:02:39.505 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:39.769 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:39.769 ninja: no work to do. 00:02:47.885 CC lib/ut/ut.o 00:02:47.885 CC lib/ut_mock/mock.o 00:02:47.885 CC lib/log/log.o 00:02:47.885 CC lib/log/log_flags.o 00:02:47.885 CC lib/log/log_deprecated.o 00:02:47.885 LIB libspdk_ut.a 00:02:47.885 LIB libspdk_ut_mock.a 00:02:47.885 LIB libspdk_log.a 00:02:47.885 SO libspdk_ut_mock.so.6.0 00:02:47.885 SO libspdk_ut.so.2.0 00:02:47.885 SO libspdk_log.so.7.0 00:02:47.885 SYMLINK libspdk_ut_mock.so 00:02:47.885 SYMLINK libspdk_ut.so 00:02:47.885 SYMLINK libspdk_log.so 00:02:48.143 CXX lib/trace_parser/trace.o 00:02:48.143 CC lib/util/base64.o 00:02:48.143 CC lib/ioat/ioat.o 00:02:48.143 CC lib/util/cpuset.o 00:02:48.143 CC lib/util/bit_array.o 00:02:48.143 CC lib/dma/dma.o 00:02:48.143 CC lib/util/crc16.o 00:02:48.143 CC lib/util/crc32.o 00:02:48.143 CC lib/util/crc32c.o 00:02:48.143 CC lib/util/crc32_ieee.o 00:02:48.143 CC lib/util/crc64.o 00:02:48.143 CC lib/util/dif.o 00:02:48.143 CC lib/util/fd.o 00:02:48.143 CC lib/util/file.o 00:02:48.143 CC lib/util/hexlify.o 00:02:48.143 CC lib/util/iov.o 00:02:48.143 CC lib/util/math.o 00:02:48.143 CC lib/util/pipe.o 00:02:48.143 CC lib/util/strerror_tls.o 00:02:48.143 CC lib/util/string.o 00:02:48.143 CC lib/util/uuid.o 00:02:48.143 CC lib/util/xor.o 00:02:48.143 CC lib/util/fd_group.o 00:02:48.143 CC lib/util/zipf.o 00:02:48.401 CC lib/vfio_user/host/vfio_user.o 00:02:48.401 CC lib/vfio_user/host/vfio_user_pci.o 00:02:48.401 LIB libspdk_dma.a 00:02:48.401 SO libspdk_dma.so.4.0 00:02:48.401 LIB libspdk_ioat.a 00:02:48.401 SYMLINK libspdk_dma.so 00:02:48.401 SO libspdk_ioat.so.7.0 00:02:48.659 LIB libspdk_vfio_user.a 00:02:48.659 SYMLINK libspdk_ioat.so 00:02:48.659 SO libspdk_vfio_user.so.5.0 00:02:48.659 LIB libspdk_util.a 00:02:48.659 SYMLINK libspdk_vfio_user.so 00:02:48.659 SO libspdk_util.so.9.1 00:02:48.918 SYMLINK libspdk_util.so 00:02:48.918 LIB libspdk_trace_parser.a 00:02:48.918 SO libspdk_trace_parser.so.5.0 00:02:49.176 SYMLINK libspdk_trace_parser.so 00:02:49.176 CC lib/json/json_parse.o 00:02:49.176 CC lib/rdma_provider/common.o 00:02:49.176 CC lib/json/json_util.o 00:02:49.176 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.176 CC lib/vmd/vmd.o 00:02:49.176 CC lib/json/json_write.o 00:02:49.176 CC lib/vmd/led.o 00:02:49.176 CC lib/conf/conf.o 00:02:49.176 CC lib/idxd/idxd.o 00:02:49.176 CC lib/idxd/idxd_user.o 00:02:49.176 CC lib/env_dpdk/env.o 00:02:49.176 CC lib/idxd/idxd_kernel.o 00:02:49.176 CC lib/env_dpdk/memory.o 00:02:49.176 CC lib/env_dpdk/pci.o 00:02:49.176 CC lib/env_dpdk/init.o 00:02:49.176 CC lib/rdma_utils/rdma_utils.o 00:02:49.176 CC lib/env_dpdk/threads.o 00:02:49.176 CC lib/env_dpdk/pci_ioat.o 00:02:49.176 CC lib/env_dpdk/pci_virtio.o 00:02:49.176 CC lib/env_dpdk/pci_vmd.o 00:02:49.176 CC lib/env_dpdk/pci_idxd.o 00:02:49.176 CC lib/env_dpdk/pci_event.o 00:02:49.176 CC lib/env_dpdk/sigbus_handler.o 00:02:49.176 CC lib/env_dpdk/pci_dpdk.o 00:02:49.176 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.176 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.434 LIB libspdk_rdma_provider.a 00:02:49.434 LIB libspdk_conf.a 00:02:49.434 SO libspdk_conf.so.6.0 00:02:49.434 SO libspdk_rdma_provider.so.6.0 00:02:49.434 LIB libspdk_json.a 00:02:49.434 LIB libspdk_rdma_utils.a 
00:02:49.434 SYMLINK libspdk_conf.so 00:02:49.434 SO libspdk_rdma_utils.so.1.0 00:02:49.434 SO libspdk_json.so.6.0 00:02:49.434 SYMLINK libspdk_rdma_provider.so 00:02:49.434 SYMLINK libspdk_rdma_utils.so 00:02:49.434 SYMLINK libspdk_json.so 00:02:49.693 LIB libspdk_idxd.a 00:02:49.693 SO libspdk_idxd.so.12.0 00:02:49.693 LIB libspdk_vmd.a 00:02:49.693 SYMLINK libspdk_idxd.so 00:02:49.693 SO libspdk_vmd.so.6.0 00:02:49.693 SYMLINK libspdk_vmd.so 00:02:49.950 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.950 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.950 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.950 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.950 LIB libspdk_jsonrpc.a 00:02:50.208 SO libspdk_jsonrpc.so.6.0 00:02:50.208 SYMLINK libspdk_jsonrpc.so 00:02:50.208 LIB libspdk_env_dpdk.a 00:02:50.208 SO libspdk_env_dpdk.so.14.1 00:02:50.466 SYMLINK libspdk_env_dpdk.so 00:02:50.466 CC lib/rpc/rpc.o 00:02:50.724 LIB libspdk_rpc.a 00:02:50.724 SO libspdk_rpc.so.6.0 00:02:50.724 SYMLINK libspdk_rpc.so 00:02:50.982 CC lib/trace/trace.o 00:02:50.982 CC lib/trace/trace_flags.o 00:02:50.982 CC lib/keyring/keyring.o 00:02:50.982 CC lib/trace/trace_rpc.o 00:02:50.982 CC lib/notify/notify.o 00:02:50.982 CC lib/keyring/keyring_rpc.o 00:02:50.982 CC lib/notify/notify_rpc.o 00:02:51.240 LIB libspdk_notify.a 00:02:51.240 SO libspdk_notify.so.6.0 00:02:51.240 LIB libspdk_keyring.a 00:02:51.240 LIB libspdk_trace.a 00:02:51.240 SO libspdk_keyring.so.1.0 00:02:51.240 SO libspdk_trace.so.10.0 00:02:51.240 SYMLINK libspdk_notify.so 00:02:51.240 SYMLINK libspdk_keyring.so 00:02:51.499 SYMLINK libspdk_trace.so 00:02:51.817 CC lib/thread/thread.o 00:02:51.817 CC lib/sock/sock.o 00:02:51.817 CC lib/thread/iobuf.o 00:02:51.817 CC lib/sock/sock_rpc.o 00:02:52.076 LIB libspdk_sock.a 00:02:52.076 SO libspdk_sock.so.10.0 00:02:52.076 SYMLINK libspdk_sock.so 00:02:52.336 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.336 CC lib/nvme/nvme_ctrlr.o 00:02:52.336 CC lib/nvme/nvme_fabric.o 00:02:52.336 CC lib/nvme/nvme_ns_cmd.o 00:02:52.336 CC lib/nvme/nvme_ns.o 00:02:52.336 CC lib/nvme/nvme_pcie_common.o 00:02:52.336 CC lib/nvme/nvme_pcie.o 00:02:52.336 CC lib/nvme/nvme_qpair.o 00:02:52.336 CC lib/nvme/nvme.o 00:02:52.336 CC lib/nvme/nvme_quirks.o 00:02:52.336 CC lib/nvme/nvme_transport.o 00:02:52.336 CC lib/nvme/nvme_discovery.o 00:02:52.336 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.336 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.336 CC lib/nvme/nvme_tcp.o 00:02:52.336 CC lib/nvme/nvme_opal.o 00:02:52.336 CC lib/nvme/nvme_io_msg.o 00:02:52.336 CC lib/nvme/nvme_poll_group.o 00:02:52.336 CC lib/nvme/nvme_zns.o 00:02:52.336 CC lib/nvme/nvme_stubs.o 00:02:52.336 CC lib/nvme/nvme_auth.o 00:02:52.336 CC lib/nvme/nvme_cuse.o 00:02:52.336 CC lib/nvme/nvme_vfio_user.o 00:02:52.336 CC lib/nvme/nvme_rdma.o 00:02:52.903 LIB libspdk_thread.a 00:02:52.903 SO libspdk_thread.so.10.1 00:02:52.903 SYMLINK libspdk_thread.so 00:02:53.162 CC lib/accel/accel.o 00:02:53.162 CC lib/accel/accel_rpc.o 00:02:53.162 CC lib/accel/accel_sw.o 00:02:53.162 CC lib/virtio/virtio.o 00:02:53.162 CC lib/virtio/virtio_vhost_user.o 00:02:53.162 CC lib/virtio/virtio_vfio_user.o 00:02:53.162 CC lib/virtio/virtio_pci.o 00:02:53.162 CC lib/vfu_tgt/tgt_endpoint.o 00:02:53.162 CC lib/vfu_tgt/tgt_rpc.o 00:02:53.162 CC lib/init/json_config.o 00:02:53.162 CC lib/blob/zeroes.o 00:02:53.162 CC lib/blob/blobstore.o 00:02:53.162 CC lib/init/subsystem.o 00:02:53.162 CC lib/blob/request.o 00:02:53.162 CC lib/init/subsystem_rpc.o 00:02:53.162 CC lib/init/rpc.o 00:02:53.162 CC lib/blob/blob_bs_dev.o 
00:02:53.420 LIB libspdk_init.a 00:02:53.420 SO libspdk_init.so.5.0 00:02:53.420 LIB libspdk_virtio.a 00:02:53.420 LIB libspdk_vfu_tgt.a 00:02:53.420 SO libspdk_vfu_tgt.so.3.0 00:02:53.420 SO libspdk_virtio.so.7.0 00:02:53.420 SYMLINK libspdk_init.so 00:02:53.420 SYMLINK libspdk_vfu_tgt.so 00:02:53.420 SYMLINK libspdk_virtio.so 00:02:53.679 CC lib/event/app.o 00:02:53.680 CC lib/event/reactor.o 00:02:53.680 CC lib/event/log_rpc.o 00:02:53.680 CC lib/event/app_rpc.o 00:02:53.680 CC lib/event/scheduler_static.o 00:02:53.939 LIB libspdk_accel.a 00:02:53.939 SO libspdk_accel.so.15.1 00:02:53.939 SYMLINK libspdk_accel.so 00:02:53.939 LIB libspdk_nvme.a 00:02:53.939 LIB libspdk_event.a 00:02:54.198 SO libspdk_nvme.so.13.1 00:02:54.198 SO libspdk_event.so.14.0 00:02:54.198 SYMLINK libspdk_event.so 00:02:54.198 CC lib/bdev/bdev.o 00:02:54.198 CC lib/bdev/bdev_rpc.o 00:02:54.198 CC lib/bdev/bdev_zone.o 00:02:54.198 CC lib/bdev/part.o 00:02:54.198 CC lib/bdev/scsi_nvme.o 00:02:54.457 SYMLINK libspdk_nvme.so 00:02:55.391 LIB libspdk_blob.a 00:02:55.391 SO libspdk_blob.so.11.0 00:02:55.391 SYMLINK libspdk_blob.so 00:02:55.650 CC lib/lvol/lvol.o 00:02:55.650 CC lib/blobfs/blobfs.o 00:02:55.650 CC lib/blobfs/tree.o 00:02:55.911 LIB libspdk_bdev.a 00:02:56.169 SO libspdk_bdev.so.15.1 00:02:56.169 SYMLINK libspdk_bdev.so 00:02:56.169 LIB libspdk_blobfs.a 00:02:56.169 SO libspdk_blobfs.so.10.0 00:02:56.169 LIB libspdk_lvol.a 00:02:56.169 SO libspdk_lvol.so.10.0 00:02:56.426 SYMLINK libspdk_blobfs.so 00:02:56.426 SYMLINK libspdk_lvol.so 00:02:56.426 CC lib/ublk/ublk.o 00:02:56.426 CC lib/ublk/ublk_rpc.o 00:02:56.426 CC lib/scsi/dev.o 00:02:56.426 CC lib/scsi/lun.o 00:02:56.426 CC lib/scsi/port.o 00:02:56.427 CC lib/scsi/scsi.o 00:02:56.427 CC lib/nbd/nbd.o 00:02:56.427 CC lib/scsi/scsi_bdev.o 00:02:56.427 CC lib/nbd/nbd_rpc.o 00:02:56.427 CC lib/nvmf/ctrlr.o 00:02:56.427 CC lib/scsi/scsi_pr.o 00:02:56.427 CC lib/nvmf/ctrlr_discovery.o 00:02:56.427 CC lib/scsi/scsi_rpc.o 00:02:56.427 CC lib/nvmf/ctrlr_bdev.o 00:02:56.427 CC lib/scsi/task.o 00:02:56.427 CC lib/nvmf/subsystem.o 00:02:56.427 CC lib/ftl/ftl_core.o 00:02:56.427 CC lib/nvmf/nvmf.o 00:02:56.427 CC lib/ftl/ftl_init.o 00:02:56.427 CC lib/nvmf/nvmf_rpc.o 00:02:56.427 CC lib/ftl/ftl_layout.o 00:02:56.427 CC lib/nvmf/transport.o 00:02:56.427 CC lib/ftl/ftl_debug.o 00:02:56.427 CC lib/nvmf/tcp.o 00:02:56.427 CC lib/ftl/ftl_io.o 00:02:56.427 CC lib/nvmf/stubs.o 00:02:56.427 CC lib/nvmf/mdns_server.o 00:02:56.427 CC lib/ftl/ftl_sb.o 00:02:56.427 CC lib/nvmf/vfio_user.o 00:02:56.427 CC lib/ftl/ftl_l2p.o 00:02:56.427 CC lib/ftl/ftl_l2p_flat.o 00:02:56.427 CC lib/nvmf/rdma.o 00:02:56.427 CC lib/nvmf/auth.o 00:02:56.427 CC lib/ftl/ftl_nv_cache.o 00:02:56.427 CC lib/ftl/ftl_band.o 00:02:56.427 CC lib/ftl/ftl_band_ops.o 00:02:56.427 CC lib/ftl/ftl_writer.o 00:02:56.427 CC lib/ftl/ftl_rq.o 00:02:56.427 CC lib/ftl/ftl_reloc.o 00:02:56.427 CC lib/ftl/ftl_p2l.o 00:02:56.427 CC lib/ftl/ftl_l2p_cache.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.427 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:56.427 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.427 CC lib/ftl/utils/ftl_conf.o 00:02:56.427 CC lib/ftl/utils/ftl_md.o 00:02:56.427 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.427 CC lib/ftl/utils/ftl_mempool.o 00:02:56.427 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.427 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.427 CC lib/ftl/utils/ftl_property.o 00:02:56.427 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.427 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.427 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.427 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.427 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.427 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.427 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.427 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.427 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.427 CC lib/ftl/base/ftl_base_dev.o 00:02:56.427 CC lib/ftl/base/ftl_base_bdev.o 00:02:56.427 CC lib/ftl/ftl_trace.o 00:02:56.993 LIB libspdk_nbd.a 00:02:56.993 SO libspdk_nbd.so.7.0 00:02:56.993 SYMLINK libspdk_nbd.so 00:02:56.993 LIB libspdk_scsi.a 00:02:57.251 SO libspdk_scsi.so.9.0 00:02:57.251 LIB libspdk_ublk.a 00:02:57.251 SO libspdk_ublk.so.3.0 00:02:57.251 SYMLINK libspdk_scsi.so 00:02:57.251 SYMLINK libspdk_ublk.so 00:02:57.510 LIB libspdk_ftl.a 00:02:57.510 CC lib/iscsi/conn.o 00:02:57.510 CC lib/iscsi/init_grp.o 00:02:57.510 CC lib/iscsi/iscsi.o 00:02:57.510 CC lib/iscsi/md5.o 00:02:57.510 CC lib/iscsi/param.o 00:02:57.510 CC lib/iscsi/portal_grp.o 00:02:57.510 CC lib/iscsi/tgt_node.o 00:02:57.510 CC lib/iscsi/iscsi_subsystem.o 00:02:57.510 CC lib/vhost/vhost.o 00:02:57.510 CC lib/iscsi/iscsi_rpc.o 00:02:57.510 CC lib/vhost/vhost_rpc.o 00:02:57.510 CC lib/iscsi/task.o 00:02:57.510 CC lib/vhost/vhost_scsi.o 00:02:57.510 CC lib/vhost/vhost_blk.o 00:02:57.510 CC lib/vhost/rte_vhost_user.o 00:02:57.510 SO libspdk_ftl.so.9.0 00:02:57.768 SYMLINK libspdk_ftl.so 00:02:58.027 LIB libspdk_nvmf.a 00:02:58.285 SO libspdk_nvmf.so.18.1 00:02:58.285 LIB libspdk_vhost.a 00:02:58.285 SYMLINK libspdk_nvmf.so 00:02:58.285 SO libspdk_vhost.so.8.0 00:02:58.544 SYMLINK libspdk_vhost.so 00:02:58.544 LIB libspdk_iscsi.a 00:02:58.544 SO libspdk_iscsi.so.8.0 00:02:58.804 SYMLINK libspdk_iscsi.so 00:02:59.372 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.372 CC module/vfu_device/vfu_virtio.o 00:02:59.372 CC module/vfu_device/vfu_virtio_blk.o 00:02:59.372 CC module/vfu_device/vfu_virtio_scsi.o 00:02:59.372 CC module/vfu_device/vfu_virtio_rpc.o 00:02:59.372 LIB libspdk_env_dpdk_rpc.a 00:02:59.372 CC module/keyring/linux/keyring_rpc.o 00:02:59.372 CC module/keyring/linux/keyring.o 00:02:59.372 CC module/accel/dsa/accel_dsa.o 00:02:59.372 CC module/accel/error/accel_error.o 00:02:59.372 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.372 CC module/accel/error/accel_error_rpc.o 00:02:59.372 CC module/accel/ioat/accel_ioat.o 00:02:59.372 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.372 CC module/accel/iaa/accel_iaa.o 00:02:59.372 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.372 CC module/blob/bdev/blob_bdev.o 00:02:59.372 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.372 CC module/sock/posix/posix.o 00:02:59.372 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.372 CC module/keyring/file/keyring.o 00:02:59.372 CC module/keyring/file/keyring_rpc.o 00:02:59.372 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.372 SO libspdk_env_dpdk_rpc.so.6.0 00:02:59.372 SYMLINK libspdk_env_dpdk_rpc.so 00:02:59.631 LIB libspdk_keyring_linux.a 00:02:59.631 LIB libspdk_keyring_file.a 00:02:59.631 LIB libspdk_scheduler_gscheduler.a 
00:02:59.631 LIB libspdk_accel_ioat.a 00:02:59.631 LIB libspdk_accel_error.a 00:02:59.631 LIB libspdk_scheduler_dpdk_governor.a 00:02:59.631 SO libspdk_keyring_linux.so.1.0 00:02:59.631 SO libspdk_accel_ioat.so.6.0 00:02:59.631 LIB libspdk_accel_iaa.a 00:02:59.631 SO libspdk_scheduler_gscheduler.so.4.0 00:02:59.631 LIB libspdk_scheduler_dynamic.a 00:02:59.631 SO libspdk_accel_error.so.2.0 00:02:59.632 SO libspdk_keyring_file.so.1.0 00:02:59.632 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:59.632 LIB libspdk_accel_dsa.a 00:02:59.632 LIB libspdk_blob_bdev.a 00:02:59.632 SO libspdk_accel_iaa.so.3.0 00:02:59.632 SYMLINK libspdk_keyring_linux.so 00:02:59.632 SO libspdk_scheduler_dynamic.so.4.0 00:02:59.632 SO libspdk_accel_dsa.so.5.0 00:02:59.632 SO libspdk_blob_bdev.so.11.0 00:02:59.632 SYMLINK libspdk_keyring_file.so 00:02:59.632 SYMLINK libspdk_accel_ioat.so 00:02:59.632 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:59.632 SYMLINK libspdk_scheduler_gscheduler.so 00:02:59.632 SYMLINK libspdk_accel_error.so 00:02:59.632 SYMLINK libspdk_scheduler_dynamic.so 00:02:59.632 SYMLINK libspdk_accel_iaa.so 00:02:59.632 SYMLINK libspdk_accel_dsa.so 00:02:59.632 SYMLINK libspdk_blob_bdev.so 00:02:59.632 LIB libspdk_vfu_device.a 00:02:59.890 SO libspdk_vfu_device.so.3.0 00:02:59.890 SYMLINK libspdk_vfu_device.so 00:02:59.890 LIB libspdk_sock_posix.a 00:02:59.890 SO libspdk_sock_posix.so.6.0 00:03:00.149 SYMLINK libspdk_sock_posix.so 00:03:00.149 CC module/bdev/gpt/gpt.o 00:03:00.149 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.149 CC module/bdev/error/vbdev_error.o 00:03:00.149 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.149 CC module/bdev/split/vbdev_split.o 00:03:00.149 CC module/bdev/ftl/bdev_ftl.o 00:03:00.149 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.149 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.149 CC module/bdev/aio/bdev_aio.o 00:03:00.149 CC module/bdev/delay/vbdev_delay.o 00:03:00.149 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.149 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.149 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.149 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.149 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.149 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.149 CC module/bdev/nvme/bdev_nvme.o 00:03:00.149 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.149 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.149 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.149 CC module/bdev/nvme/nvme_rpc.o 00:03:00.149 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.149 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.149 CC module/bdev/nvme/vbdev_opal.o 00:03:00.149 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.149 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.149 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.149 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.149 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.149 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.149 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.149 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.149 CC module/bdev/malloc/bdev_malloc.o 00:03:00.149 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.149 CC module/bdev/null/bdev_null.o 00:03:00.149 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.149 CC module/bdev/raid/bdev_raid.o 00:03:00.149 CC module/bdev/null/bdev_null_rpc.o 00:03:00.149 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.149 CC module/bdev/raid/raid1.o 00:03:00.149 CC module/bdev/raid/raid0.o 00:03:00.149 CC module/bdev/raid/concat.o 00:03:00.408 LIB libspdk_blobfs_bdev.a 00:03:00.408 LIB 
libspdk_bdev_split.a 00:03:00.408 SO libspdk_blobfs_bdev.so.6.0 00:03:00.408 LIB libspdk_bdev_gpt.a 00:03:00.408 LIB libspdk_bdev_error.a 00:03:00.408 SO libspdk_bdev_error.so.6.0 00:03:00.408 SO libspdk_bdev_gpt.so.6.0 00:03:00.408 SO libspdk_bdev_split.so.6.0 00:03:00.408 LIB libspdk_bdev_null.a 00:03:00.408 LIB libspdk_bdev_passthru.a 00:03:00.408 LIB libspdk_bdev_aio.a 00:03:00.408 LIB libspdk_bdev_ftl.a 00:03:00.408 SYMLINK libspdk_blobfs_bdev.so 00:03:00.408 SO libspdk_bdev_null.so.6.0 00:03:00.408 SYMLINK libspdk_bdev_split.so 00:03:00.408 SO libspdk_bdev_passthru.so.6.0 00:03:00.408 SYMLINK libspdk_bdev_gpt.so 00:03:00.408 SYMLINK libspdk_bdev_error.so 00:03:00.408 SO libspdk_bdev_ftl.so.6.0 00:03:00.408 LIB libspdk_bdev_zone_block.a 00:03:00.408 SO libspdk_bdev_aio.so.6.0 00:03:00.667 LIB libspdk_bdev_malloc.a 00:03:00.667 LIB libspdk_bdev_delay.a 00:03:00.667 LIB libspdk_bdev_iscsi.a 00:03:00.667 SO libspdk_bdev_zone_block.so.6.0 00:03:00.667 SYMLINK libspdk_bdev_null.so 00:03:00.668 SO libspdk_bdev_malloc.so.6.0 00:03:00.668 SYMLINK libspdk_bdev_passthru.so 00:03:00.668 SO libspdk_bdev_delay.so.6.0 00:03:00.668 SYMLINK libspdk_bdev_ftl.so 00:03:00.668 SYMLINK libspdk_bdev_aio.so 00:03:00.668 SO libspdk_bdev_iscsi.so.6.0 00:03:00.668 SYMLINK libspdk_bdev_zone_block.so 00:03:00.668 LIB libspdk_bdev_lvol.a 00:03:00.668 SYMLINK libspdk_bdev_malloc.so 00:03:00.668 SYMLINK libspdk_bdev_delay.so 00:03:00.668 SYMLINK libspdk_bdev_iscsi.so 00:03:00.668 SO libspdk_bdev_lvol.so.6.0 00:03:00.668 LIB libspdk_bdev_virtio.a 00:03:00.668 SO libspdk_bdev_virtio.so.6.0 00:03:00.668 SYMLINK libspdk_bdev_lvol.so 00:03:00.668 SYMLINK libspdk_bdev_virtio.so 00:03:00.926 LIB libspdk_bdev_raid.a 00:03:00.926 SO libspdk_bdev_raid.so.6.0 00:03:01.186 SYMLINK libspdk_bdev_raid.so 00:03:01.753 LIB libspdk_bdev_nvme.a 00:03:01.753 SO libspdk_bdev_nvme.so.7.0 00:03:02.013 SYMLINK libspdk_bdev_nvme.so 00:03:02.583 CC module/event/subsystems/iobuf/iobuf.o 00:03:02.583 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:02.583 CC module/event/subsystems/vmd/vmd.o 00:03:02.583 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:02.583 CC module/event/subsystems/scheduler/scheduler.o 00:03:02.583 CC module/event/subsystems/sock/sock.o 00:03:02.583 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:02.583 CC module/event/subsystems/keyring/keyring.o 00:03:02.583 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:02.842 LIB libspdk_event_sock.a 00:03:02.842 LIB libspdk_event_scheduler.a 00:03:02.842 LIB libspdk_event_vmd.a 00:03:02.842 LIB libspdk_event_keyring.a 00:03:02.842 LIB libspdk_event_vhost_blk.a 00:03:02.842 LIB libspdk_event_vfu_tgt.a 00:03:02.842 LIB libspdk_event_iobuf.a 00:03:02.842 SO libspdk_event_sock.so.5.0 00:03:02.842 SO libspdk_event_vhost_blk.so.3.0 00:03:02.842 SO libspdk_event_scheduler.so.4.0 00:03:02.842 SO libspdk_event_vmd.so.6.0 00:03:02.842 SO libspdk_event_keyring.so.1.0 00:03:02.842 SO libspdk_event_vfu_tgt.so.3.0 00:03:02.842 SO libspdk_event_iobuf.so.3.0 00:03:02.842 SYMLINK libspdk_event_sock.so 00:03:02.842 SYMLINK libspdk_event_keyring.so 00:03:02.842 SYMLINK libspdk_event_vhost_blk.so 00:03:02.842 SYMLINK libspdk_event_scheduler.so 00:03:02.842 SYMLINK libspdk_event_vfu_tgt.so 00:03:02.842 SYMLINK libspdk_event_vmd.so 00:03:02.842 SYMLINK libspdk_event_iobuf.so 00:03:03.102 CC module/event/subsystems/accel/accel.o 00:03:03.361 LIB libspdk_event_accel.a 00:03:03.361 SO libspdk_event_accel.so.6.0 00:03:03.361 SYMLINK libspdk_event_accel.so 00:03:03.643 CC 
module/event/subsystems/bdev/bdev.o 00:03:03.902 LIB libspdk_event_bdev.a 00:03:03.902 SO libspdk_event_bdev.so.6.0 00:03:03.902 SYMLINK libspdk_event_bdev.so 00:03:04.161 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.161 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.161 CC module/event/subsystems/scsi/scsi.o 00:03:04.161 CC module/event/subsystems/ublk/ublk.o 00:03:04.161 CC module/event/subsystems/nbd/nbd.o 00:03:04.419 LIB libspdk_event_ublk.a 00:03:04.419 LIB libspdk_event_nbd.a 00:03:04.419 LIB libspdk_event_scsi.a 00:03:04.419 SO libspdk_event_nbd.so.6.0 00:03:04.419 SO libspdk_event_ublk.so.3.0 00:03:04.419 SO libspdk_event_scsi.so.6.0 00:03:04.419 LIB libspdk_event_nvmf.a 00:03:04.419 SYMLINK libspdk_event_nbd.so 00:03:04.419 SO libspdk_event_nvmf.so.6.0 00:03:04.419 SYMLINK libspdk_event_ublk.so 00:03:04.419 SYMLINK libspdk_event_scsi.so 00:03:04.678 SYMLINK libspdk_event_nvmf.so 00:03:04.937 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:04.938 CC module/event/subsystems/iscsi/iscsi.o 00:03:04.938 LIB libspdk_event_vhost_scsi.a 00:03:04.938 LIB libspdk_event_iscsi.a 00:03:04.938 SO libspdk_event_vhost_scsi.so.3.0 00:03:04.938 SO libspdk_event_iscsi.so.6.0 00:03:04.938 SYMLINK libspdk_event_vhost_scsi.so 00:03:05.196 SYMLINK libspdk_event_iscsi.so 00:03:05.196 SO libspdk.so.6.0 00:03:05.196 SYMLINK libspdk.so 00:03:05.455 CXX app/trace/trace.o 00:03:05.455 CC app/trace_record/trace_record.o 00:03:05.455 CC app/spdk_top/spdk_top.o 00:03:05.456 TEST_HEADER include/spdk/accel.h 00:03:05.456 CC app/spdk_nvme_perf/perf.o 00:03:05.456 TEST_HEADER include/spdk/accel_module.h 00:03:05.456 TEST_HEADER include/spdk/assert.h 00:03:05.733 TEST_HEADER include/spdk/barrier.h 00:03:05.733 CC app/spdk_nvme_identify/identify.o 00:03:05.733 TEST_HEADER include/spdk/bdev_module.h 00:03:05.733 CC test/rpc_client/rpc_client_test.o 00:03:05.733 TEST_HEADER include/spdk/base64.h 00:03:05.733 TEST_HEADER include/spdk/bdev.h 00:03:05.733 CC app/spdk_lspci/spdk_lspci.o 00:03:05.733 TEST_HEADER include/spdk/bit_array.h 00:03:05.733 TEST_HEADER include/spdk/bdev_zone.h 00:03:05.733 TEST_HEADER include/spdk/bit_pool.h 00:03:05.733 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:05.733 TEST_HEADER include/spdk/blob_bdev.h 00:03:05.733 TEST_HEADER include/spdk/blobfs.h 00:03:05.733 TEST_HEADER include/spdk/blob.h 00:03:05.733 CC app/spdk_nvme_discover/discovery_aer.o 00:03:05.733 TEST_HEADER include/spdk/config.h 00:03:05.733 TEST_HEADER include/spdk/conf.h 00:03:05.733 TEST_HEADER include/spdk/cpuset.h 00:03:05.733 TEST_HEADER include/spdk/crc16.h 00:03:05.733 TEST_HEADER include/spdk/crc32.h 00:03:05.733 TEST_HEADER include/spdk/crc64.h 00:03:05.733 TEST_HEADER include/spdk/dma.h 00:03:05.733 TEST_HEADER include/spdk/dif.h 00:03:05.733 TEST_HEADER include/spdk/env_dpdk.h 00:03:05.733 TEST_HEADER include/spdk/env.h 00:03:05.733 TEST_HEADER include/spdk/endian.h 00:03:05.733 TEST_HEADER include/spdk/event.h 00:03:05.733 TEST_HEADER include/spdk/fd_group.h 00:03:05.733 TEST_HEADER include/spdk/fd.h 00:03:05.733 TEST_HEADER include/spdk/file.h 00:03:05.733 TEST_HEADER include/spdk/ftl.h 00:03:05.733 TEST_HEADER include/spdk/gpt_spec.h 00:03:05.733 TEST_HEADER include/spdk/histogram_data.h 00:03:05.733 TEST_HEADER include/spdk/idxd_spec.h 00:03:05.733 TEST_HEADER include/spdk/idxd.h 00:03:05.733 TEST_HEADER include/spdk/init.h 00:03:05.733 TEST_HEADER include/spdk/hexlify.h 00:03:05.733 TEST_HEADER include/spdk/iscsi_spec.h 00:03:05.733 TEST_HEADER include/spdk/ioat_spec.h 00:03:05.733 
TEST_HEADER include/spdk/ioat.h 00:03:05.733 TEST_HEADER include/spdk/json.h 00:03:05.733 TEST_HEADER include/spdk/keyring.h 00:03:05.733 TEST_HEADER include/spdk/keyring_module.h 00:03:05.733 TEST_HEADER include/spdk/jsonrpc.h 00:03:05.733 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:05.733 TEST_HEADER include/spdk/likely.h 00:03:05.733 CC app/nvmf_tgt/nvmf_main.o 00:03:05.733 TEST_HEADER include/spdk/lvol.h 00:03:05.733 TEST_HEADER include/spdk/log.h 00:03:05.733 TEST_HEADER include/spdk/memory.h 00:03:05.733 TEST_HEADER include/spdk/mmio.h 00:03:05.733 TEST_HEADER include/spdk/nbd.h 00:03:05.733 TEST_HEADER include/spdk/nvme_intel.h 00:03:05.733 TEST_HEADER include/spdk/notify.h 00:03:05.733 TEST_HEADER include/spdk/nvme.h 00:03:05.733 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:05.733 TEST_HEADER include/spdk/nvme_zns.h 00:03:05.733 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:05.733 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:05.733 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:05.733 TEST_HEADER include/spdk/nvme_spec.h 00:03:05.733 TEST_HEADER include/spdk/nvmf.h 00:03:05.733 TEST_HEADER include/spdk/nvmf_spec.h 00:03:05.733 TEST_HEADER include/spdk/opal.h 00:03:05.733 CC app/spdk_dd/spdk_dd.o 00:03:05.733 TEST_HEADER include/spdk/nvmf_transport.h 00:03:05.734 TEST_HEADER include/spdk/pci_ids.h 00:03:05.734 TEST_HEADER include/spdk/opal_spec.h 00:03:05.734 TEST_HEADER include/spdk/queue.h 00:03:05.734 TEST_HEADER include/spdk/pipe.h 00:03:05.734 TEST_HEADER include/spdk/reduce.h 00:03:05.734 TEST_HEADER include/spdk/rpc.h 00:03:05.734 TEST_HEADER include/spdk/scheduler.h 00:03:05.734 TEST_HEADER include/spdk/scsi_spec.h 00:03:05.734 CC app/iscsi_tgt/iscsi_tgt.o 00:03:05.734 TEST_HEADER include/spdk/scsi.h 00:03:05.734 TEST_HEADER include/spdk/sock.h 00:03:05.734 TEST_HEADER include/spdk/string.h 00:03:05.734 TEST_HEADER include/spdk/stdinc.h 00:03:05.734 TEST_HEADER include/spdk/thread.h 00:03:05.734 TEST_HEADER include/spdk/trace.h 00:03:05.734 TEST_HEADER include/spdk/trace_parser.h 00:03:05.734 TEST_HEADER include/spdk/tree.h 00:03:05.734 TEST_HEADER include/spdk/ublk.h 00:03:05.734 TEST_HEADER include/spdk/uuid.h 00:03:05.734 TEST_HEADER include/spdk/util.h 00:03:05.734 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:05.734 TEST_HEADER include/spdk/version.h 00:03:05.734 TEST_HEADER include/spdk/vmd.h 00:03:05.734 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:05.734 TEST_HEADER include/spdk/vhost.h 00:03:05.734 TEST_HEADER include/spdk/xor.h 00:03:05.734 TEST_HEADER include/spdk/zipf.h 00:03:05.734 CXX test/cpp_headers/accel.o 00:03:05.734 CXX test/cpp_headers/accel_module.o 00:03:05.734 CXX test/cpp_headers/assert.o 00:03:05.734 CXX test/cpp_headers/barrier.o 00:03:05.734 CC app/spdk_tgt/spdk_tgt.o 00:03:05.734 CXX test/cpp_headers/base64.o 00:03:05.734 CXX test/cpp_headers/bdev.o 00:03:05.734 CXX test/cpp_headers/bdev_module.o 00:03:05.734 CXX test/cpp_headers/bdev_zone.o 00:03:05.734 CXX test/cpp_headers/bit_pool.o 00:03:05.734 CXX test/cpp_headers/blob_bdev.o 00:03:05.734 CXX test/cpp_headers/bit_array.o 00:03:05.734 CXX test/cpp_headers/blobfs_bdev.o 00:03:05.734 CXX test/cpp_headers/blob.o 00:03:05.734 CXX test/cpp_headers/blobfs.o 00:03:05.734 CXX test/cpp_headers/conf.o 00:03:05.734 CXX test/cpp_headers/cpuset.o 00:03:05.734 CXX test/cpp_headers/crc16.o 00:03:05.734 CXX test/cpp_headers/config.o 00:03:05.734 CXX test/cpp_headers/crc32.o 00:03:05.734 CXX test/cpp_headers/crc64.o 00:03:05.734 CXX test/cpp_headers/endian.o 00:03:05.734 CXX test/cpp_headers/dma.o 
00:03:05.734 CXX test/cpp_headers/dif.o 00:03:05.734 CXX test/cpp_headers/event.o 00:03:05.734 CXX test/cpp_headers/env.o 00:03:05.734 CXX test/cpp_headers/env_dpdk.o 00:03:05.734 CXX test/cpp_headers/fd.o 00:03:05.734 CXX test/cpp_headers/ftl.o 00:03:05.734 CXX test/cpp_headers/fd_group.o 00:03:05.734 CXX test/cpp_headers/file.o 00:03:05.734 CXX test/cpp_headers/gpt_spec.o 00:03:05.734 CXX test/cpp_headers/hexlify.o 00:03:05.734 CXX test/cpp_headers/idxd.o 00:03:05.734 CXX test/cpp_headers/idxd_spec.o 00:03:05.734 CXX test/cpp_headers/init.o 00:03:05.734 CXX test/cpp_headers/histogram_data.o 00:03:05.734 CXX test/cpp_headers/ioat.o 00:03:05.734 CXX test/cpp_headers/iscsi_spec.o 00:03:05.734 CXX test/cpp_headers/ioat_spec.o 00:03:05.734 CXX test/cpp_headers/json.o 00:03:05.734 CXX test/cpp_headers/jsonrpc.o 00:03:05.734 CXX test/cpp_headers/keyring_module.o 00:03:05.734 CXX test/cpp_headers/keyring.o 00:03:05.734 CXX test/cpp_headers/log.o 00:03:05.734 CXX test/cpp_headers/lvol.o 00:03:05.734 CXX test/cpp_headers/likely.o 00:03:05.734 CXX test/cpp_headers/memory.o 00:03:05.734 CXX test/cpp_headers/mmio.o 00:03:05.734 CXX test/cpp_headers/notify.o 00:03:05.734 CXX test/cpp_headers/nbd.o 00:03:05.734 CXX test/cpp_headers/nvme.o 00:03:05.734 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.734 CXX test/cpp_headers/nvme_intel.o 00:03:05.734 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.734 CXX test/cpp_headers/nvme_zns.o 00:03:05.734 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.734 CXX test/cpp_headers/nvme_spec.o 00:03:05.734 CXX test/cpp_headers/nvmf.o 00:03:05.734 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.734 CXX test/cpp_headers/nvmf_spec.o 00:03:05.734 CXX test/cpp_headers/nvmf_transport.o 00:03:05.734 CXX test/cpp_headers/opal.o 00:03:05.734 CXX test/cpp_headers/opal_spec.o 00:03:05.734 CXX test/cpp_headers/pci_ids.o 00:03:05.734 CXX test/cpp_headers/queue.o 00:03:05.734 CXX test/cpp_headers/pipe.o 00:03:05.734 CXX test/cpp_headers/reduce.o 00:03:05.734 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.734 CXX test/cpp_headers/rpc.o 00:03:05.734 CC test/thread/poller_perf/poller_perf.o 00:03:05.734 CC examples/util/zipf/zipf.o 00:03:05.734 CC examples/ioat/perf/perf.o 00:03:05.734 CC test/env/memory/memory_ut.o 00:03:05.734 CC test/env/pci/pci_ut.o 00:03:05.734 CC examples/ioat/verify/verify.o 00:03:05.734 CXX test/cpp_headers/scheduler.o 00:03:05.734 CC test/app/jsoncat/jsoncat.o 00:03:05.734 CC test/env/vtophys/vtophys.o 00:03:05.734 CC test/app/stub/stub.o 00:03:06.060 CC app/fio/nvme/fio_plugin.o 00:03:06.060 CC test/dma/test_dma/test_dma.o 00:03:06.060 CC app/fio/bdev/fio_plugin.o 00:03:06.060 CC test/app/histogram_perf/histogram_perf.o 00:03:06.060 LINK spdk_lspci 00:03:06.060 CC test/app/bdev_svc/bdev_svc.o 00:03:06.336 LINK spdk_trace_record 00:03:06.336 LINK nvmf_tgt 00:03:06.336 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:06.336 LINK rpc_client_test 00:03:06.336 LINK spdk_nvme_discover 00:03:06.336 LINK interrupt_tgt 00:03:06.336 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.336 CXX test/cpp_headers/scsi.o 00:03:06.336 CXX test/cpp_headers/scsi_spec.o 00:03:06.336 CXX test/cpp_headers/sock.o 00:03:06.336 CXX test/cpp_headers/stdinc.o 00:03:06.336 LINK zipf 00:03:06.336 LINK env_dpdk_post_init 00:03:06.336 CXX test/cpp_headers/string.o 00:03:06.336 CXX test/cpp_headers/trace.o 00:03:06.336 CXX test/cpp_headers/trace_parser.o 00:03:06.336 CXX test/cpp_headers/thread.o 00:03:06.336 CXX test/cpp_headers/tree.o 00:03:06.336 CXX test/cpp_headers/ublk.o 00:03:06.336 CXX 
test/cpp_headers/util.o 00:03:06.336 CXX test/cpp_headers/uuid.o 00:03:06.336 CXX test/cpp_headers/version.o 00:03:06.336 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.336 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.336 CXX test/cpp_headers/vhost.o 00:03:06.336 CXX test/cpp_headers/vmd.o 00:03:06.336 CXX test/cpp_headers/xor.o 00:03:06.336 CXX test/cpp_headers/zipf.o 00:03:06.336 LINK iscsi_tgt 00:03:06.336 LINK stub 00:03:06.336 LINK jsoncat 00:03:06.336 LINK poller_perf 00:03:06.336 LINK ioat_perf 00:03:06.336 LINK spdk_tgt 00:03:06.336 LINK vtophys 00:03:06.336 LINK spdk_dd 00:03:06.336 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:06.336 LINK spdk_trace 00:03:06.336 LINK histogram_perf 00:03:06.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.336 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:06.336 LINK bdev_svc 00:03:06.595 LINK verify 00:03:06.595 LINK pci_ut 00:03:06.595 LINK test_dma 00:03:06.595 LINK spdk_bdev 00:03:06.853 CC examples/idxd/perf/perf.o 00:03:06.853 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.853 CC examples/sock/hello_world/hello_sock.o 00:03:06.853 LINK spdk_nvme_identify 00:03:06.853 CC examples/vmd/led/led.o 00:03:06.853 CC test/event/event_perf/event_perf.o 00:03:06.853 CC examples/thread/thread/thread_ex.o 00:03:06.853 LINK nvme_fuzz 00:03:06.853 CC test/event/reactor_perf/reactor_perf.o 00:03:06.853 CC app/vhost/vhost.o 00:03:06.853 CC test/event/reactor/reactor.o 00:03:06.853 LINK spdk_nvme_perf 00:03:06.853 CC test/event/app_repeat/app_repeat.o 00:03:06.853 LINK spdk_nvme 00:03:06.853 CC test/event/scheduler/scheduler.o 00:03:06.853 LINK vhost_fuzz 00:03:06.853 LINK spdk_top 00:03:06.853 LINK lsvmd 00:03:06.853 LINK event_perf 00:03:06.853 LINK mem_callbacks 00:03:06.853 LINK led 00:03:06.853 LINK reactor_perf 00:03:06.853 LINK reactor 00:03:07.113 LINK app_repeat 00:03:07.113 LINK hello_sock 00:03:07.113 LINK vhost 00:03:07.113 LINK thread 00:03:07.113 LINK idxd_perf 00:03:07.113 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.113 CC test/nvme/reset/reset.o 00:03:07.113 CC test/nvme/reserve/reserve.o 00:03:07.113 CC test/nvme/simple_copy/simple_copy.o 00:03:07.113 CC test/nvme/cuse/cuse.o 00:03:07.113 CC test/nvme/sgl/sgl.o 00:03:07.113 CC test/nvme/connect_stress/connect_stress.o 00:03:07.113 CC test/nvme/compliance/nvme_compliance.o 00:03:07.113 CC test/nvme/aer/aer.o 00:03:07.113 CC test/nvme/overhead/overhead.o 00:03:07.113 CC test/nvme/boot_partition/boot_partition.o 00:03:07.113 CC test/nvme/err_injection/err_injection.o 00:03:07.113 CC test/nvme/fdp/fdp.o 00:03:07.113 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.113 CC test/nvme/e2edp/nvme_dp.o 00:03:07.113 CC test/nvme/startup/startup.o 00:03:07.113 LINK scheduler 00:03:07.113 CC test/accel/dif/dif.o 00:03:07.113 CC test/blobfs/mkfs/mkfs.o 00:03:07.113 LINK memory_ut 00:03:07.113 CC test/lvol/esnap/esnap.o 00:03:07.113 LINK boot_partition 00:03:07.113 LINK connect_stress 00:03:07.372 LINK fused_ordering 00:03:07.372 LINK startup 00:03:07.372 LINK simple_copy 00:03:07.372 LINK reserve 00:03:07.372 LINK doorbell_aers 00:03:07.372 LINK err_injection 00:03:07.372 LINK reset 00:03:07.372 LINK sgl 00:03:07.372 LINK mkfs 00:03:07.372 LINK aer 00:03:07.372 LINK overhead 00:03:07.372 LINK nvme_dp 00:03:07.372 LINK nvme_compliance 00:03:07.372 LINK fdp 00:03:07.372 CC examples/nvme/arbitration/arbitration.o 00:03:07.372 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.372 CC examples/nvme/hotplug/hotplug.o 00:03:07.372 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.372 
CC examples/nvme/reconnect/reconnect.o 00:03:07.372 CC examples/nvme/hello_world/hello_world.o 00:03:07.372 CC examples/nvme/abort/abort.o 00:03:07.372 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.372 LINK dif 00:03:07.630 CC examples/accel/perf/accel_perf.o 00:03:07.630 CC examples/blob/cli/blobcli.o 00:03:07.630 CC examples/blob/hello_world/hello_blob.o 00:03:07.630 LINK pmr_persistence 00:03:07.630 LINK cmb_copy 00:03:07.630 LINK hotplug 00:03:07.630 LINK arbitration 00:03:07.630 LINK hello_world 00:03:07.630 LINK reconnect 00:03:07.630 LINK hello_blob 00:03:07.630 LINK abort 00:03:07.630 LINK iscsi_fuzz 00:03:07.889 LINK nvme_manage 00:03:07.889 LINK accel_perf 00:03:07.889 LINK blobcli 00:03:07.889 CC test/bdev/bdevio/bdevio.o 00:03:08.148 LINK cuse 00:03:08.148 LINK bdevio 00:03:08.408 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.408 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.668 LINK hello_bdev 00:03:08.927 LINK bdevperf 00:03:09.493 CC examples/nvmf/nvmf/nvmf.o 00:03:09.751 LINK nvmf 00:03:10.685 LINK esnap 00:03:10.943 00:03:10.943 real 0m34.125s 00:03:10.943 user 5m9.934s 00:03:10.943 sys 2m27.725s 00:03:10.943 11:53:00 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:10.943 11:53:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.943 ************************************ 00:03:10.943 END TEST make 00:03:10.943 ************************************ 00:03:10.943 11:53:00 -- common/autotest_common.sh@1142 -- $ return 0 00:03:10.943 11:53:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.943 11:53:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.943 11:53:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.943 11:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.943 11:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.943 11:53:00 -- pm/common@44 -- $ pid=822007 00:03:10.943 11:53:00 -- pm/common@50 -- $ kill -TERM 822007 00:03:10.943 11:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.943 11:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.943 11:53:00 -- pm/common@44 -- $ pid=822008 00:03:10.943 11:53:00 -- pm/common@50 -- $ kill -TERM 822008 00:03:10.943 11:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.943 11:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.943 11:53:00 -- pm/common@44 -- $ pid=822010 00:03:10.943 11:53:00 -- pm/common@50 -- $ kill -TERM 822010 00:03:10.943 11:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.943 11:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.943 11:53:00 -- pm/common@44 -- $ pid=822034 00:03:10.943 11:53:00 -- pm/common@50 -- $ sudo -E kill -TERM 822034 00:03:11.202 11:53:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.202 11:53:00 -- nvmf/common.sh@7 -- # uname -s 00:03:11.202 11:53:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.202 11:53:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.202 11:53:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.202 11:53:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.202 11:53:00 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.202 11:53:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.202 11:53:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.202 11:53:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.202 11:53:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.202 11:53:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.202 11:53:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:11.202 11:53:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:11.202 11:53:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.202 11:53:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.202 11:53:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.202 11:53:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.202 11:53:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.202 11:53:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.202 11:53:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.202 11:53:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.202 11:53:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.202 11:53:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.202 11:53:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.202 11:53:00 -- paths/export.sh@5 -- # export PATH 00:03:11.202 11:53:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.202 11:53:00 -- nvmf/common.sh@47 -- # : 0 00:03:11.202 11:53:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:11.202 11:53:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:11.202 11:53:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.202 11:53:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.202 11:53:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.202 11:53:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:11.202 11:53:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:11.202 11:53:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:11.202 11:53:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.202 11:53:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.202 11:53:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.202 11:53:00 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.202 11:53:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.202 11:53:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.202 11:53:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:11.202 11:53:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.202 11:53:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.202 11:53:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.202 11:53:00 -- spdk/autotest.sh@48 -- # udevadm_pid=895326 00:03:11.202 11:53:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.202 11:53:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.202 11:53:00 -- pm/common@17 -- # local monitor 00:03:11.202 11:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.202 11:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.202 11:53:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.202 11:53:01 -- pm/common@21 -- # date +%s 00:03:11.202 11:53:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.202 11:53:01 -- pm/common@21 -- # date +%s 00:03:11.202 11:53:01 -- pm/common@25 -- # sleep 1 00:03:11.202 11:53:01 -- pm/common@21 -- # date +%s 00:03:11.202 11:53:01 -- pm/common@21 -- # date +%s 00:03:11.202 11:53:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721037181 00:03:11.202 11:53:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721037181 00:03:11.202 11:53:01 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721037181 00:03:11.202 11:53:01 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721037181 00:03:11.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721037181_collect-vmstat.pm.log 00:03:11.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721037181_collect-cpu-load.pm.log 00:03:11.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721037181_collect-cpu-temp.pm.log 00:03:11.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721037181_collect-bmc-pm.bmc.pm.log 00:03:12.135 11:53:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.135 11:53:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.135 11:53:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.135 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:03:12.135 11:53:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.135 11:53:02 -- common/autotest_common.sh@746 -- # xtrace_disable 
00:03:12.135 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:03:12.135 11:53:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:12.135 11:53:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.135 11:53:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.135 11:53:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:12.135 11:53:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:12.136 11:53:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.136 11:53:02 -- common/autotest_common.sh@1455 -- # uname 00:03:12.136 11:53:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:12.136 11:53:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.136 11:53:02 -- common/autotest_common.sh@1475 -- # uname 00:03:12.136 11:53:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:12.136 11:53:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:12.136 11:53:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:12.136 11:53:02 -- spdk/autotest.sh@72 -- # hash lcov 00:03:12.136 11:53:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:12.136 11:53:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:12.136 --rc lcov_branch_coverage=1 00:03:12.136 --rc lcov_function_coverage=1 00:03:12.136 --rc genhtml_branch_coverage=1 00:03:12.136 --rc genhtml_function_coverage=1 00:03:12.136 --rc genhtml_legend=1 00:03:12.136 --rc geninfo_all_blocks=1 00:03:12.136 ' 00:03:12.136 11:53:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:12.136 --rc lcov_branch_coverage=1 00:03:12.136 --rc lcov_function_coverage=1 00:03:12.136 --rc genhtml_branch_coverage=1 00:03:12.136 --rc genhtml_function_coverage=1 00:03:12.136 --rc genhtml_legend=1 00:03:12.136 --rc geninfo_all_blocks=1 00:03:12.136 ' 00:03:12.136 11:53:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:12.136 --rc lcov_branch_coverage=1 00:03:12.136 --rc lcov_function_coverage=1 00:03:12.136 --rc genhtml_branch_coverage=1 00:03:12.136 --rc genhtml_function_coverage=1 00:03:12.136 --rc genhtml_legend=1 00:03:12.136 --rc geninfo_all_blocks=1 00:03:12.136 --no-external' 00:03:12.136 11:53:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:12.136 --rc lcov_branch_coverage=1 00:03:12.136 --rc lcov_function_coverage=1 00:03:12.136 --rc genhtml_branch_coverage=1 00:03:12.136 --rc genhtml_function_coverage=1 00:03:12.136 --rc genhtml_legend=1 00:03:12.136 --rc geninfo_all_blocks=1 00:03:12.136 --no-external' 00:03:12.136 11:53:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:12.395 lcov: LCOV version 1.14 00:03:12.395 11:53:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:24.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:24.616 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:34.584 geninfo: WARNING: GCOV did not produce any data (no functions found) for the .gcno files under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers: accel, accel_module, barrier, bdev_module, bit_pool, base64, blob_bdev, bdev, assert, blobfs, blobfs_bdev, bdev_zone, bit_array, blob, cpuset, crc32, conf, config, endian, env_dpdk, crc16, crc64, dma, event, env, dif, gpt_spec, fd_group, fd, ftl, hexlify, file, ioat_spec, init, iscsi_spec, idxd_spec, idxd, histogram_data, ioat, mmio, log, jsonrpc, likely, json, memory, keyring, notify, keyring_module, nvme, lvol, nvme_ocssd, nvmf_cmd, nvme_zns, nvme_ocssd_spec, nbd, nvmf_fc_spec, nvmf_spec, opal, nvme_intel, nvmf, nvme_spec, nvmf_transport, opal_spec, queue, pipe, pci_ids, reduce, rpc, scheduler, scsi, sock, scsi_spec, stdinc, trace_parser, trace, thread, string, tree, ublk, util, uuid, version, vfio_user_pci
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:34.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:34.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:34.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:34.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:34.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:34.585 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:37.226 11:53:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:37.226 11:53:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:37.226 11:53:26 -- common/autotest_common.sh@10 -- # set +x 00:03:37.226 11:53:26 -- spdk/autotest.sh@91 -- # rm -f 00:03:37.226 11:53:26 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.765 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:39.765 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.766 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:40.025 11:53:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:40.025 11:53:29 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.025 11:53:29 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.025 11:53:29 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.025 11:53:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.025 11:53:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.025 11:53:29 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:03:40.025 11:53:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:40.025 11:53:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:40.025 11:53:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:40.025 11:53:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:40.025 11:53:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:40.025 11:53:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:40.025 11:53:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:40.025 11:53:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:40.025 No valid GPT data, bailing
00:03:40.025 11:53:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:40.025 11:53:29 -- scripts/common.sh@391 -- # pt=
00:03:40.025 11:53:29 -- scripts/common.sh@392 -- # return 1
00:03:40.025 11:53:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:40.025 1+0 records in
00:03:40.025 1+0 records out
00:03:40.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00202257 s, 518 MB/s
00:03:40.025 11:53:29 -- spdk/autotest.sh@118 -- # sync
00:03:40.025 11:53:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:40.025 11:53:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:40.025 11:53:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:45.303 11:53:35 -- spdk/autotest.sh@124 -- # uname -s
00:03:45.303 11:53:35 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:45.303 11:53:35 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:45.303 11:53:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:45.303 11:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:45.303 11:53:35 -- common/autotest_common.sh@10 -- # set +x
00:03:45.303 ************************************
00:03:45.303 START TEST setup.sh
00:03:45.303 ************************************
00:03:45.303 11:53:35 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:45.303 * Looking for test storage...
00:03:45.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:45.303 11:53:35 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:45.303 11:53:35 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:45.303 11:53:35 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:45.303 11:53:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:45.303 11:53:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:45.303 11:53:35 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:45.303 ************************************
00:03:45.303 START TEST acl
00:03:45.303 ************************************
00:03:45.303 11:53:35 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:45.562 * Looking for test storage...
00:03:45.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.562 11:53:35 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:45.562 11:53:35 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:45.562 11:53:35 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.562 11:53:35 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.962 11:53:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:48.962 11:53:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:48.962 11:53:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.962 11:53:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:48.962 11:53:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.962 11:53:38 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:51.496 Hugepages 00:03:51.496 node hugesize free / total 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.496 00:03:51.496 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:51.496 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:51.497 11:53:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:51.497 11:53:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.497 11:53:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.497 11:53:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.497 ************************************ 00:03:51.497 START TEST denied 00:03:51.497 ************************************ 00:03:51.497 11:53:41 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:51.497 11:53:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:51.497 11:53:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:51.497 11:53:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:51.497 11:53:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.497 11:53:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.786 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:54.786 11:53:44 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.786 11:53:44 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.981 00:03:58.981 real 0m7.173s 00:03:58.981 user 0m2.332s 00:03:58.981 sys 0m4.103s 00:03:58.981 11:53:48 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.981 11:53:48 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:58.981 ************************************ 00:03:58.981 END TEST denied 00:03:58.981 ************************************ 00:03:58.981 11:53:48 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:58.981 11:53:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:58.981 11:53:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.981 11:53:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.981 11:53:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.981 ************************************ 00:03:58.981 START TEST allowed 00:03:58.981 ************************************ 00:03:58.981 11:53:48 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:58.981 11:53:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:58.981 11:53:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:58.981 11:53:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:58.981 11:53:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.981 11:53:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.178 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.178 11:53:52 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:03.178 11:53:52 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:03.178 11:53:52 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:03.178 11:53:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.178 11:53:52 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.467 00:04:06.467 real 0m7.094s 00:04:06.467 user 0m2.262s 00:04:06.467 sys 0m3.979s 00:04:06.467 11:53:55 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.467 11:53:55 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:06.467 ************************************ 00:04:06.467 END TEST allowed 00:04:06.467 ************************************ 00:04:06.467 11:53:55 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:06.467 00:04:06.467 real 0m20.548s 00:04:06.467 user 0m6.950s 00:04:06.467 sys 0m12.227s 00:04:06.467 11:53:55 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.467 11:53:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.467 ************************************ 00:04:06.467 END TEST acl 00:04:06.467 ************************************ 00:04:06.467 11:53:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:06.467 11:53:55 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.467 11:53:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.467 11:53:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.467 11:53:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.467 ************************************ 00:04:06.467 START TEST hugepages 00:04:06.467 ************************************ 00:04:06.467 11:53:55 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:06.467 * Looking for test storage... 00:04:06.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 171740064 kB' 'MemAvailable: 174617836 kB' 'Buffers: 3896 kB' 'Cached: 11765136 kB' 'SwapCached: 0 kB' 'Active: 8773308 kB' 'Inactive: 3507356 kB' 'Active(anon): 8381300 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514848 kB' 'Mapped: 215852 kB' 'Shmem: 7869668 kB' 'KReclaimable: 245720 kB' 'Slab: 818072 kB' 'SReclaimable: 245720 kB' 'SUnreclaim: 572352 kB' 'KernelStack: 20432 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 9895540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.467 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:06.468 11:53:55 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.468 
11:53:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.468 11:53:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:06.468 11:53:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.469 11:53:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.469 11:53:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:06.469 ************************************ 00:04:06.469 START TEST default_setup 00:04:06.469 ************************************ 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.469 11:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.006 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.006 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.265 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.829 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173752844 kB' 'MemAvailable: 176630596 kB' 'Buffers: 3896 kB' 'Cached: 11765252 kB' 'SwapCached: 0 kB' 'Active: 8791000 kB' 'Inactive: 3507356 kB' 'Active(anon): 8398992 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532044 kB' 'Mapped: 215872 kB' 'Shmem: 7869784 kB' 'KReclaimable: 245680 kB' 'Slab: 816880 kB' 'SReclaimable: 245680 kB' 'SUnreclaim: 571200 
kB' 'KernelStack: 20704 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9915280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.092 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173755516 kB' 'MemAvailable: 176633268 kB' 'Buffers: 3896 kB' 'Cached: 11765252 kB' 'SwapCached: 0 kB' 'Active: 8790488 kB' 'Inactive: 3507356 kB' 'Active(anon): 8398480 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531500 kB' 'Mapped: 215872 kB' 'Shmem: 7869784 kB' 'KReclaimable: 245680 kB' 'Slab: 816940 kB' 'SReclaimable: 245680 kB' 'SUnreclaim: 571260 kB' 'KernelStack: 20800 kB' 'PageTables: 9788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9915048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.093 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.094 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173755784 kB' 'MemAvailable: 176633536 kB' 'Buffers: 3896 kB' 'Cached: 11765272 kB' 'SwapCached: 0 kB' 'Active: 8789816 kB' 'Inactive: 3507356 kB' 'Active(anon): 8397808 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531292 kB' 'Mapped: 215792 kB' 'Shmem: 7869804 kB' 'KReclaimable: 245680 kB' 'Slab: 816852 kB' 'SReclaimable: 245680 kB' 'SUnreclaim: 571172 kB' 'KernelStack: 20768 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9915320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.095 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 
11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.096 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:10.097 nr_hugepages=1024 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.097 resv_hugepages=0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.097 surplus_hugepages=0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.097 anon_hugepages=0 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.097 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 
11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173755716 kB' 'MemAvailable: 176633468 kB' 'Buffers: 3896 kB' 'Cached: 11765272 kB' 'SwapCached: 0 kB' 'Active: 8789776 kB' 'Inactive: 3507356 kB' 'Active(anon): 8397768 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531256 kB' 'Mapped: 215792 kB' 'Shmem: 7869804 kB' 'KReclaimable: 245680 kB' 'Slab: 816852 kB' 'SReclaimable: 245680 kB' 'SUnreclaim: 571172 kB' 'KernelStack: 20672 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9913844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
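The long field-by-field trace above and below is setup/common.sh's get_meminfo walking the whole of /proc/meminfo with IFS=': ' until it reaches the requested key (HugePages_Rsvd came back 0, HugePages_Total comes back 1024 from the snapshot just printed). A minimal stand-alone sketch of that lookup, assuming bash 4+; the function and variable names here are illustrative, not the script's own code:

get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local var val rest
    # when a node id is given, prefer the per-node view of meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val rest; do
        # per-node files prefix every line with "Node <id>"; re-split without it
        [[ $var == Node ]] && IFS=': ' read -r var val rest <<< "$rest"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}
# get_meminfo_sketch HugePages_Rsvd    -> 0 on this box
# get_meminfo_sketch HugePages_Total   -> 1024 on this box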
00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.098 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:53:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.099 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84758584 kB' 'MemUsed: 12904100 kB' 'SwapCached: 0 kB' 'Active: 5759832 kB' 'Inactive: 3337684 kB' 'Active(anon): 5602292 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8891552 kB' 'Mapped: 79764 kB' 'AnonPages: 209212 kB' 'Shmem: 5396328 kB' 'KernelStack: 11976 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130104 kB' 'Slab: 400784 kB' 
'SReclaimable: 130104 kB' 'SUnreclaim: 270680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
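Here the same lookup is repeated per NUMA node: get_nodes globs /sys/devices/system/node/node+([0-9]) (two nodes on this machine) and get_meminfo is pointed at node0's own meminfo file, whose snapshot was just printed. A hedged, self-contained way to reproduce that per-node readout; the array and variable names below are illustrative:

shopt -s extglob                       # needed for the +([0-9]) glob the script also uses
declare -A node_hugepages
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # per-node lines look like "Node 0 HugePages_Total:  1024"
    node_hugepages[$id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
echo "nodes: ${!node_hugepages[*]}"                   # 0 1 on this run
echo "node0 HugePages_Total: ${node_hugepages[0]}"    # 1024 on this run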
00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.100 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
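The check all of this feeds is simple arithmetic: the pages the kernel reports must equal the pages the test asked for plus any surplus and reserved pages, and the per-node counts must match the expectation echoed below ("node0=1024 expecting 1024"). A hedged restatement of that accounting, using the values observed on this run:

nr_hugepages=1024 surp=0 resv=0        # values echoed by the trace above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
# On this run every one of the 1024 default-sized (2048 kB) pages sits on
# node 0, which is why node1's count is 0 and node0 is expected to hold 1024.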
00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.101 node0=1024 expecting 1024 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.101 00:04:10.101 real 0m3.990s 00:04:10.101 user 0m1.316s 00:04:10.101 sys 0m1.964s 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.101 11:54:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:10.101 ************************************ 00:04:10.101 END TEST default_setup 00:04:10.101 ************************************ 00:04:10.101 11:54:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:10.101 11:54:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:10.101 11:54:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.101 11:54:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.101 11:54:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:10.360 ************************************ 00:04:10.360 START TEST per_node_1G_alloc 00:04:10.360 ************************************ 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.360 11:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.894 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.894 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.894 
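For the per_node_1G_alloc case the sizing traced above works out as: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size gives 512 pages, and that count is assigned to each of the two requested nodes before scripts/setup.sh is invoked with NRHUGE=512 HUGENODE=0,1. The same arithmetic as a hedged sketch (not the script's own formula):

size_kb=1048576                                                  # argument passed to get_test_nr_hugepages
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
nr_hugepages=$(( size_kb / hugepage_kb ))                        # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"                         # matches the environment the test exports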
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.894 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173703552 kB' 'MemAvailable: 176581288 kB' 'Buffers: 3896 kB' 'Cached: 11765396 kB' 'SwapCached: 0 kB' 'Active: 8798496 kB' 'Inactive: 3507356 kB' 'Active(anon): 8406488 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539320 kB' 'Mapped: 216764 kB' 'Shmem: 7869928 kB' 'KReclaimable: 245648 kB' 'Slab: 817100 kB' 'SReclaimable: 245648 kB' 'SUnreclaim: 571452 kB' 'KernelStack: 20672 kB' 'PageTables: 9392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9922352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315632 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.158 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:13.159 11:54:02 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173703492 kB' 'MemAvailable: 176581228 kB' 'Buffers: 3896 kB' 'Cached: 11765396 kB' 'SwapCached: 0 kB' 'Active: 8797332 kB' 'Inactive: 3507356 kB' 'Active(anon): 8405324 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538752 kB' 'Mapped: 216664 kB' 'Shmem: 7869928 kB' 'KReclaimable: 245648 kB' 'Slab: 817008 kB' 'SReclaimable: 245648 kB' 'SUnreclaim: 571360 kB' 'KernelStack: 20560 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9922372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 
11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.159 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173703492 kB' 'MemAvailable: 176581228 kB' 'Buffers: 3896 kB' 'Cached: 11765412 kB' 'SwapCached: 0 kB' 'Active: 8797292 kB' 'Inactive: 3507356 kB' 'Active(anon): 8405284 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538668 kB' 'Mapped: 216664 kB' 'Shmem: 7869944 kB' 'KReclaimable: 245648 kB' 'Slab: 817008 kB' 'SReclaimable: 245648 kB' 'SUnreclaim: 571360 kB' 'KernelStack: 20544 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9922392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.160 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.161 11:54:03 
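The pass traced above is setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo, comparing each field name against the requested key (HugePages_Rsvd here) and echoing its value on the first match, which is why every non-matching field appears as a "continue" in the xtrace; the backslash-escaped right-hand side of each [[ ... == \H\u\g\e... ]] test is simply how bash's xtrace prints a quoted, literal comparison string. Below is a minimal sketch of that lookup, reconstructed from the trace rather than copied from the SPDK script; the function name and the per-node prefix handling are assumptions.

# Hedged sketch: re-implementation of the meminfo lookup suggested by the trace,
# not the verbatim SPDK setup/common.sh.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the sysfs copy, where every field is prefixed with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"     # value only; the unit (kB) falls into the discarded field
            return 0
        fi
    done < "$mem_f"
    return 1
}

With the values printed in the trace, get_meminfo_sketch HugePages_Rsvd would print 0 (hence the resv=0 assignment that follows), and get_meminfo_sketch HugePages_Surp 0 would read node 0's sysfs meminfo instead of /proc/meminfo.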
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.161 nr_hugepages=1024 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.161 resv_hugepages=0 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.161 surplus_hugepages=0 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.161 anon_hugepages=0 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173703492 kB' 'MemAvailable: 176581228 kB' 'Buffers: 3896 kB' 'Cached: 11765456 kB' 'SwapCached: 0 kB' 'Active: 8797040 kB' 'Inactive: 3507356 kB' 'Active(anon): 8405032 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538368 kB' 'Mapped: 216664 kB' 'Shmem: 7869988 kB' 'KReclaimable: 245648 kB' 'Slab: 817008 kB' 'SReclaimable: 245648 kB' 'SUnreclaim: 571360 kB' 'KernelStack: 20544 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9922416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315600 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 
kB' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.161 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.162 11:54:03 
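At this point hugepages.sh has read HugePages_Total back from /proc/meminfo and is confirming that the 1024 pages it configured equal nr_hugepages plus surplus plus reserved, after which get_nodes (whose body follows) enumerates /sys/devices/system/node/node* and expects the pool to be split 512/512 across the two NUMA nodes. A rough sketch of that accounting step, reusing the get_meminfo_sketch helper above; the variable names mirror the trace, the surrounding glue is assumed.

# Hedged sketch of the accounting and node-enumeration step traced here.
requested=1024
nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
surp=$(get_meminfo_sketch HugePages_Surp)            # 0
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
(( requested == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"

# get_nodes: one entry per NUMA node, each expected to carry half of the pool.
shopt -s extglob
declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512
done
echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine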
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85810600 kB' 'MemUsed: 11852084 kB' 'SwapCached: 0 kB' 'Active: 5759364 kB' 'Inactive: 3337684 kB' 'Active(anon): 5601824 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8891700 kB' 'Mapped: 79760 kB' 'AnonPages: 208624 kB' 'Shmem: 5396476 kB' 'KernelStack: 11960 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130072 kB' 'Slab: 400788 kB' 'SReclaimable: 130072 kB' 'SUnreclaim: 270716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.162 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 
11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87891380 kB' 'MemUsed: 5827088 kB' 'SwapCached: 0 kB' 'Active: 3037692 kB' 'Inactive: 169672 kB' 'Active(anon): 2803224 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2877676 kB' 'Mapped: 136904 kB' 'AnonPages: 329740 kB' 'Shmem: 2473536 kB' 
'KernelStack: 8584 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115576 kB' 'Slab: 416220 kB' 'SReclaimable: 115576 kB' 'SUnreclaim: 300644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.163 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:13.422 node0=512 expecting 512 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:13.422 node1=512 expecting 512 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:13.422 00:04:13.422 real 0m3.064s 00:04:13.422 user 0m1.256s 00:04:13.422 sys 0m1.875s 00:04:13.422 11:54:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.422 11:54:03 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:13.422 ************************************ 00:04:13.422 END TEST per_node_1G_alloc 00:04:13.422 ************************************ 00:04:13.422 11:54:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:13.422 11:54:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:13.422 11:54:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.422 11:54:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.422 11:54:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.422 ************************************ 00:04:13.422 START TEST even_2G_alloc 00:04:13.422 ************************************ 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:13.422 11:54:03 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.422 11:54:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.955 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.955 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.955 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.955 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:16.216 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.216 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173736220 kB' 'MemAvailable: 176613952 kB' 'Buffers: 3896 kB' 'Cached: 11765540 kB' 'SwapCached: 0 kB' 'Active: 8796232 kB' 'Inactive: 3507356 kB' 'Active(anon): 8404224 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537404 kB' 'Mapped: 215776 kB' 'Shmem: 7870072 kB' 'KReclaimable: 245640 kB' 'Slab: 816940 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571300 kB' 'KernelStack: 20432 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9922548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315648 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.217 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173736220 kB' 'MemAvailable: 176613952 kB' 'Buffers: 3896 kB' 'Cached: 11765544 kB' 'SwapCached: 0 kB' 'Active: 8795340 kB' 'Inactive: 3507356 kB' 'Active(anon): 8403332 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536564 kB' 'Mapped: 215764 kB' 'Shmem: 7870076 kB' 'KReclaimable: 245640 kB' 'Slab: 816980 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20400 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9911116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315552 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.218 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.219 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173736880 kB' 'MemAvailable: 176614612 kB' 'Buffers: 3896 kB' 'Cached: 11765560 kB' 'SwapCached: 0 kB' 'Active: 8795272 kB' 'Inactive: 3507356 kB' 'Active(anon): 8403264 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536436 kB' 'Mapped: 215764 kB' 'Shmem: 7870092 kB' 'KReclaimable: 245640 kB' 'Slab: 816980 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20384 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9911136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 
11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.220 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
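
The records around this point are bash xtrace output from setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (here HugePages_Rsvd). A minimal standalone sketch of that pattern follows; it is a reconstruction from the trace, not the exact SPDK helper, and the function body and variable names simply mirror what the trace shows.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace: read /proc/meminfo
# (or a per-node meminfo file), split each line on ': ', and echo the value of
# the requested field. Hypothetical reconstruction, not the exact SPDK helper.
shopt -s extglob

get_meminfo() {
    local get=$1          # e.g. HugePages_Rsvd
    local node=${2:-}     # empty -> system-wide /proc/meminfo
    local var val _ line
    local mem_f=/proc/meminfo

    # Per-node statistics live under /sys/devices/system/node/nodeN/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Keep scanning until the requested key matches, then print its value
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example: how many hugepages the kernel currently reports as reserved
get_meminfo HugePages_Rsvd
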
00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.221 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.484 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.485 nr_hugepages=1024 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.485 resv_hugepages=0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.485 surplus_hugepages=0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.485 anon_hugepages=0 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.485 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.485 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173735948 kB' 'MemAvailable: 176613680 kB' 'Buffers: 3896 kB' 'Cached: 11765580 kB' 'SwapCached: 0 kB' 'Active: 8795408 kB' 'Inactive: 3507356 kB' 'Active(anon): 8403400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536524 kB' 'Mapped: 215764 kB' 'Shmem: 7870112 kB' 'KReclaimable: 245640 kB' 'Slab: 816980 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20400 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9913044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 
11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
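
A few records earlier (setup/hugepages.sh@99-109 in the trace) the surplus and reserved counts read this way were folded into a consistency check against the requested pool of 1024 pages, and at @110 the total is re-read the same way. A rough sketch of that arithmetic, assuming the get_meminfo sketch above; the values in the comments are the ones echoed in this run:

# Sketch of the accounting at setup/hugepages.sh@99-110: the kernel's hugepage
# pool must match the requested size plus any surplus/reserved pages before the
# even-allocation test proceeds. Assumes the get_meminfo sketch above.
nr_hugepages=1024

surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo HugePages_Total)  # 1024 in this run

(( total == nr_hugepages + surp + resv )) || { echo "unexpected hugepage pool: $total" >&2; exit 1; }
(( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages"
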
00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.486 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
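
The records that follow switch from the system-wide /proc/meminfo to the per-node files under /sys/devices/system/node, because even_2G_alloc expects the 1024-page pool to be split evenly, 512 pages per NUMA node (get_nodes reports no_nodes=2 in this run). A rough per-node check in the same spirit, again assuming the get_meminfo sketch above; the real hugepages.sh accumulates reserved and surplus pages into the expected count before comparing rather than subtracting them as done here.

# Per-node leg of even_2G_alloc as seen in the following records
# (setup/hugepages.sh@112-117): every NUMA node should hold an even share of
# the pool, 512 x 2048 kB pages here. Sketch only; assumes get_meminfo above.
declare -a nodes_test
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node ]] || continue
    nodes_test[${node##*node}]=512        # expected pages per node in this run
done

for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo HugePages_Surp "$node")   # node 0 reports 0 here
    have=$(get_meminfo HugePages_Total "$node")  # node 0 reports 512 here
    (( have - surp == nodes_test[node] )) ||
        echo "node$node holds $((have - surp)) pages, expected ${nodes_test[node]}" >&2
done
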
00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85829648 kB' 'MemUsed: 11833036 kB' 'SwapCached: 0 kB' 'Active: 
5757580 kB' 'Inactive: 3337684 kB' 'Active(anon): 5600040 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8891796 kB' 'Mapped: 79612 kB' 'AnonPages: 206048 kB' 'Shmem: 5396572 kB' 'KernelStack: 11832 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130072 kB' 'Slab: 400756 kB' 'SReclaimable: 130072 kB' 'SUnreclaim: 270684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.487 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:16.488 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87906332 kB' 'MemUsed: 5812136 kB' 'SwapCached: 0 kB' 'Active: 3037956 kB' 'Inactive: 169672 kB' 'Active(anon): 2803488 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2877696 kB' 'Mapped: 136152 kB' 'AnonPages: 330108 kB' 'Shmem: 2473556 kB' 'KernelStack: 8552 kB' 'PageTables: 4768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115568 kB' 'Slab: 416220 kB' 'SReclaimable: 115568 kB' 'SUnreclaim: 300652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.489 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:16.490 node0=512 expecting 512 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:16.490 node1=512 expecting 512 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:16.490 00:04:16.490 real 0m3.066s 00:04:16.490 user 0m1.227s 00:04:16.490 sys 0m1.908s 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.490 11:54:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:16.490 ************************************ 00:04:16.490 END TEST even_2G_alloc 00:04:16.490 ************************************ 00:04:16.490 11:54:06 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:04:16.490 11:54:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:16.490 11:54:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.490 11:54:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.490 11:54:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.490 ************************************ 00:04:16.490 START TEST odd_alloc 00:04:16.490 ************************************ 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.490 11:54:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.788 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.788 
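Before the per-node state dump that follows, the odd_alloc parameters traced just above are worth spelling out: HUGEMEM=2049 (MB) becomes a 2098176 kB request, which at the default 2048 kB hugepage size rounds up to nr_hugepages=1025, and with two NUMA nodes the setup loop leaves nodes_test[1]=512 and nodes_test[0]=513. A small sketch of that arithmetic, assuming round-up division and a "remainder to the lower-numbered nodes" split; this is illustrative only and is not the hugepages.sh code itself.

# Illustrative sizing for the odd_alloc run above (assumed formulas, observed numbers).
hugemem_mb=2049
page_kb=2048
size_kb=$(( hugemem_mb * 1024 ))                   # 2098176 kB, as in the trace
nr_pages=$(( (size_kb + page_kb - 1) / page_kb ))  # 1025 pages, rounding up
nodes=2
base=$(( nr_pages / nodes ))                       # 512
extra=$(( nr_pages % nodes ))                      # 1 leftover page
for (( node = 0; node < nodes; node++ )); do
    count=$(( base + (node < extra ? 1 : 0) ))
    echo "node${node}=${count} hugepages"          # node0=513, node1=512
done

The 1025-page total also matches the Hugetlb: 2099200 kB figure reported a little further down (1025 pages of 2048 kB each).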
0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.788 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.789 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173754520 kB' 'MemAvailable: 176632252 kB' 'Buffers: 3896 kB' 'Cached: 11765700 kB' 'SwapCached: 0 kB' 'Active: 8792760 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400752 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533816 kB' 'Mapped: 214792 kB' 'Shmem: 7870232 kB' 'KReclaimable: 245640 kB' 'Slab: 816892 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571252 kB' 'KernelStack: 20592 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9906328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 
11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.789 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 
11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.790 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173753144 kB' 'MemAvailable: 176630876 kB' 'Buffers: 3896 kB' 'Cached: 11765704 kB' 'SwapCached: 0 kB' 'Active: 8793408 kB' 'Inactive: 3507356 kB' 'Active(anon): 8401400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534408 kB' 'Mapped: 215256 kB' 'Shmem: 7870236 kB' 'KReclaimable: 245640 kB' 'Slab: 816988 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571348 kB' 'KernelStack: 20512 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9910608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.790 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.791 11:54:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 
11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173739752 kB' 'MemAvailable: 176617484 kB' 'Buffers: 3896 kB' 'Cached: 11765720 kB' 'SwapCached: 0 kB' 'Active: 8801944 kB' 'Inactive: 3507356 kB' 'Active(anon): 8409936 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543096 kB' 'Mapped: 215552 kB' 'Shmem: 7870252 kB' 'KReclaimable: 245640 kB' 'Slab: 816980 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20512 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9918968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315524 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.792 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.793 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:19.794 nr_hugepages=1025 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.794 resv_hugepages=0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.794 surplus_hugepages=0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.794 anon_hugepages=0 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
)) 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173744812 kB' 'MemAvailable: 176622544 kB' 'Buffers: 3896 kB' 'Cached: 11765756 kB' 'SwapCached: 0 kB' 'Active: 8796432 kB' 'Inactive: 3507356 kB' 'Active(anon): 8404424 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537468 kB' 'Mapped: 215648 kB' 'Shmem: 7870288 kB' 'KReclaimable: 245640 kB' 'Slab: 816980 kB' 'SReclaimable: 245640 kB' 'SUnreclaim: 571340 kB' 'KernelStack: 20480 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 9912872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315520 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.794 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.795 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85837672 kB' 'MemUsed: 11825012 kB' 'SwapCached: 0 kB' 'Active: 5759124 kB' 'Inactive: 3337684 kB' 'Active(anon): 5601584 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8891940 kB' 'Mapped: 79476 kB' 'AnonPages: 208056 kB' 'Shmem: 5396716 kB' 'KernelStack: 11960 
kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130072 kB' 'Slab: 400896 kB' 'SReclaimable: 130072 kB' 'SUnreclaim: 270824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.796 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.797 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87914996 kB' 'MemUsed: 5803472 kB' 'SwapCached: 0 kB' 'Active: 3033420 kB' 'Inactive: 169672 kB' 'Active(anon): 2798952 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2877716 kB' 'Mapped: 135276 kB' 'AnonPages: 325660 kB' 'Shmem: 2473576 kB' 'KernelStack: 8728 kB' 'PageTables: 4928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115568 kB' 'Slab: 416084 kB' 'SReclaimable: 115568 kB' 'SUnreclaim: 300516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.798 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
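The run of xtrace above is the field-matching loop inside setup/common.sh's get_meminfo, expanded once for every line of node1's meminfo; a minimal, self-contained sketch of that pattern (node number hard-coded here for illustration, variable names mirrored from the trace, not the verbatim SPDK helper) would be:

#!/usr/bin/env bash
# Sketch: pull one field (here HugePages_Surp) out of a per-node meminfo file
# the way the trace above does it -- split each line on ': ' and skip the rest.
get=HugePages_Surp
node=1                                            # assumption: illustrate with node 1
shopt -s extglob
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")                  # drop the "Node 1 " prefix of per-node files
while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue              # every other field hits 'continue', as above
    echo "$val"                                   # the value the caller reads back
    break
done < <(printf '%s\n' "${mem[@]}")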
00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:19.799 node0=512 expecting 513 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:19.799 node1=513 expecting 512 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:19.799 00:04:19.799 real 0m3.095s 00:04:19.799 user 0m1.245s 00:04:19.799 sys 0m1.920s 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.799 11:54:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.799 ************************************ 00:04:19.799 END TEST odd_alloc 00:04:19.799 ************************************ 00:04:19.799 11:54:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:19.799 11:54:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:19.799 11:54:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.799 11:54:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.799 11:54:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.799 ************************************ 00:04:19.799 START TEST custom_alloc 00:04:19.799 ************************************ 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.799 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:19.800 11:54:09 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.800 11:54:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.338 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.338 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.338 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.338 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.338 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.338 
0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.338 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.339 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172695336 kB' 'MemAvailable: 175573064 kB' 'Buffers: 3896 kB' 'Cached: 11765856 kB' 'SwapCached: 0 kB' 'Active: 8792076 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400068 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532612 kB' 'Mapped: 214868 kB' 'Shmem: 7870388 kB' 'KReclaimable: 245632 kB' 
'Slab: 817080 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571448 kB' 'KernelStack: 20432 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9906304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.606 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.607 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 
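Because get_meminfo is invoked here without a node argument (the local node= above is empty), the next few trace lines fall back to the system-wide file; a short sketch of that selection step, under the same hedging as above (illustrative only, not the verbatim helper):

# Sketch: prefer the per-node meminfo file when a node was passed in,
# otherwise keep the system-wide /proc/meminfo that the trace falls back to.
node=""                                           # empty in this call: whole-system query
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo
fi
mapfile -t mem < "$mem_f"                         # the ': '-splitting loop sketched earlier then runs over mem[]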
00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172698968 kB' 'MemAvailable: 175576696 kB' 'Buffers: 3896 kB' 'Cached: 11765860 kB' 'SwapCached: 0 kB' 'Active: 8791696 kB' 'Inactive: 3507356 kB' 'Active(anon): 8399688 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532812 kB' 'Mapped: 214760 kB' 'Shmem: 7870392 kB' 'KReclaimable: 245632 kB' 'Slab: 817088 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571456 kB' 'KernelStack: 20352 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9906320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.608 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 
11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.609 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172698732 kB' 'MemAvailable: 175576460 kB' 'Buffers: 3896 kB' 'Cached: 11765864 kB' 'SwapCached: 0 kB' 'Active: 8792120 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400112 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532664 kB' 'Mapped: 214760 kB' 'Shmem: 7870392 kB' 'KReclaimable: 245632 kB' 'Slab: 817072 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571440 kB' 'KernelStack: 20464 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9906344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.609 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 
11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.610 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:22.611 nr_hugepages=1536 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.611 resv_hugepages=0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.611 surplus_hugepages=0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.611 anon_hugepages=0 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.611 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 172699664 kB' 'MemAvailable: 175577392 kB' 'Buffers: 3896 kB' 'Cached: 11765892 kB' 'SwapCached: 0 kB' 'Active: 8792484 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400476 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532852 kB' 'Mapped: 214760 kB' 'Shmem: 7870424 kB' 'KReclaimable: 245632 kB' 'Slab: 817072 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571440 kB' 'KernelStack: 20560 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
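(By this point the trace has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1536 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and evaluated the two arithmetic guards before re-reading HugePages_Total. Written out on their own, those guards amount to the following check that the 1536 pages configured for the custom_alloc case are fully accounted for in this run; the literal 1536 and the variable names come straight from the trace, everything else is illustrative:

    # bookkeeping gathered above in this run
    nr_hugepages=1536 anon=0 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv ))   # requested pages match kernel accounting
    (( 1536 == nr_hugepages ))                 # and none of them are reserved or surplus
)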
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 9904868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.612 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
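The trace above shows setup/common.sh's get_meminfo scanning /proc/meminfo key by key with IFS=': ' and read -r var val _, echoing 1536 once HugePages_Total matches, after which hugepages.sh checks (( 1536 == nr_hugepages + surp + resv )). A minimal standalone sketch of that lookup pattern, assuming only a stock /proc/meminfo and none of the SPDK helpers:

#!/usr/bin/env bash
# get_meminfo_value KEY: print the value column for KEY from /proc/meminfo.
# Standalone sketch of the IFS=': ' / read -r var val _ scan seen in the
# trace; not the setup/common.sh helper itself.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key except the requested one
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

total=$(get_meminfo_value HugePages_Total)
echo "HugePages_Total: ${total:-unknown}"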
00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.613 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85835052 kB' 'MemUsed: 11827632 kB' 'SwapCached: 0 kB' 'Active: 5759180 kB' 'Inactive: 3337684 kB' 'Active(anon): 5601640 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892124 kB' 'Mapped: 79484 kB' 'AnonPages: 208004 kB' 'Shmem: 5396900 kB' 'KernelStack: 11928 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130064 kB' 'Slab: 401012 kB' 'SReclaimable: 130064 kB' 'SUnreclaim: 270948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.614 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 86865616 kB' 'MemUsed: 6852852 kB' 'SwapCached: 0 kB' 'Active: 3036076 kB' 'Inactive: 169672 kB' 'Active(anon): 2801608 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 169672 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2877692 kB' 'Mapped: 135780 kB' 'AnonPages: 328088 kB' 'Shmem: 2473552 kB' 'KernelStack: 8664 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115568 kB' 'Slab: 416052 kB' 'SReclaimable: 115568 kB' 'SUnreclaim: 300484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 
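Per-node figures in the trace come from /sys/devices/system/node/nodeN/meminfo: when that file exists, mem_f is switched to it, the file is read with mapfile, and the leading "Node N " prefix is stripped with an extglob substitution before the same key scan runs. A hedged sketch of that per-node lookup, independent of the SPDK scripts:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip

# node_meminfo_value NODE KEY: print KEY's value for NUMA node NODE, falling
# back to /proc/meminfo when the per-node file is missing. Sketch only.
node_meminfo_value() {
    local node=$1 get=$2 mem_f=/proc/meminfo var val _ line
    local -a mem
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

echo "node0 HugePages_Surp: $(node_meminfo_value 0 HugePages_Surp)"
echo "node1 HugePages_Surp: $(node_meminfo_value 1 HugePages_Surp)"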
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.615 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.616 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.877 11:54:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.877 node0=512 expecting 512 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:22.877 node1=1024 expecting 1024 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:22.877 00:04:22.877 real 0m3.068s 00:04:22.877 user 0m1.211s 00:04:22.877 sys 0m1.912s 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.877 11:54:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.877 ************************************ 00:04:22.877 END TEST custom_alloc 00:04:22.877 ************************************ 00:04:22.877 11:54:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:22.877 11:54:12 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:22.877 11:54:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.877 11:54:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.877 11:54:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.877 ************************************ 00:04:22.877 START TEST no_shrink_alloc 00:04:22.878 ************************************ 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:22.878 11:54:12 
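custom_alloc ends with node0=512 and node1=1024 both matching what was expected, and no_shrink_alloc then asks for 1024 two-megabyte pages pinned to node 0 before handing off to scripts/setup.sh. As a generic illustration only (the test's own allocation goes through setup.sh, which is not reproduced here), per-node 2048 kB pages can be reserved and read back through sysfs as below; writing nr_hugepages needs root:

#!/usr/bin/env bash
# Reserve and verify 2048 kB hugepages on one NUMA node via sysfs.
node=0
want=1024
sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB

echo "$want" > "$sysfs/nr_hugepages"   # ask the kernel for $want pages on this node
got=$(cat "$sysfs/nr_hugepages")       # read back what was actually reserved

if (( got == want )); then
    echo "node$node=$got expecting $want"
else
    echo "node$node only reserved $got of $want pages" >&2
fi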
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.878 11:54:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.477 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.477 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.477 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
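setup.sh reports every listed PCI function as already bound to the vfio-pci driver. One way such a binding can be confirmed from userspace is by resolving the driver symlink under /sys/bus/pci/devices; the loop below is an assumed illustration of that check (the BDFs are taken from the log above), not the script's own code:

#!/usr/bin/env bash
# Print the kernel driver currently bound to each PCI function.
for bdf in 0000:5e:00.0 0000:00:04.0 0000:80:04.0; do
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ -L $link ]]; then
        echo "$bdf: $(basename "$(readlink -f "$link")")"
    else
        echo "$bdf: no driver bound"
    fi
done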
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.742 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173783944 kB' 'MemAvailable: 176661672 kB' 'Buffers: 3896 kB' 'Cached: 11766008 kB' 'SwapCached: 0 kB' 'Active: 8792804 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400796 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533444 kB' 'Mapped: 214800 kB' 'Shmem: 7870540 kB' 'KReclaimable: 245632 kB' 'Slab: 817764 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 572132 kB' 'KernelStack: 20736 kB' 'PageTables: 9692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9906808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 
11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.743 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.744 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173783796 kB' 'MemAvailable: 176661524 kB' 'Buffers: 3896 kB' 'Cached: 11766012 kB' 'SwapCached: 0 kB' 'Active: 8793404 kB' 'Inactive: 3507356 kB' 'Active(anon): 8401396 
kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534104 kB' 'Mapped: 214800 kB' 'Shmem: 7870544 kB' 'KReclaimable: 245632 kB' 'Slab: 817804 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 572172 kB' 'KernelStack: 20752 kB' 'PageTables: 9916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9905564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.745 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.746 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.746 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173783492 kB' 'MemAvailable: 176661220 kB' 'Buffers: 3896 kB' 'Cached: 11766032 kB' 'SwapCached: 0 kB' 'Active: 8792880 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400872 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533040 kB' 'Mapped: 214800 kB' 'Shmem: 7870564 kB' 
'KReclaimable: 245632 kB' 'Slab: 817804 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 572172 kB' 'KernelStack: 20496 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.747 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.748 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.749 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.750 nr_hugepages=1024 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.750 resv_hugepages=0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.750 surplus_hugepages=0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.750 anon_hugepages=0 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173782364 kB' 'MemAvailable: 176660092 kB' 'Buffers: 3896 kB' 'Cached: 11766072 kB' 'SwapCached: 0 kB' 'Active: 8791984 kB' 'Inactive: 3507356 kB' 'Active(anon): 8399976 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532656 kB' 'Mapped: 214776 kB' 'Shmem: 7870604 kB' 'KReclaimable: 245632 kB' 'Slab: 817784 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 572152 kB' 'KernelStack: 20416 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.750 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.751 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.752 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84825644 kB' 'MemUsed: 12837040 kB' 'SwapCached: 0 kB' 'Active: 5759036 kB' 'Inactive: 3337684 kB' 'Active(anon): 5601496 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892248 kB' 'Mapped: 79500 kB' 'AnonPages: 207644 kB' 'Shmem: 5397024 kB' 'KernelStack: 11912 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130064 kB' 'Slab: 401544 kB' 'SReclaimable: 130064 kB' 'SUnreclaim: 271480 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 
11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.753 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.754 node0=1024 expecting 1024 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.754 11:54:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:29.056 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.056 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:29.056 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.056 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.056 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173826484 kB' 'MemAvailable: 176704212 kB' 'Buffers: 3896 kB' 'Cached: 11766136 kB' 'SwapCached: 0 kB' 'Active: 8792408 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532940 kB' 'Mapped: 214848 kB' 'Shmem: 7870668 kB' 'KReclaimable: 245632 kB' 'Slab: 817448 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571816 kB' 'KernelStack: 20416 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 
kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.056 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.057 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.057 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173826936 kB' 'MemAvailable: 176704664 kB' 'Buffers: 3896 kB' 'Cached: 11766140 kB' 'SwapCached: 0 kB' 'Active: 8792832 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400824 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533444 kB' 'Mapped: 214812 kB' 'Shmem: 7870672 kB' 'KReclaimable: 245632 kB' 'Slab: 817548 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571916 kB' 'KernelStack: 20448 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
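The snapshot printed just above is the full /proc/meminfo contents that the helper is about to scan key by key. Its hugepage lines are internally consistent, which a throwaway check confirms (values taken straight from the snapshot above; this is illustrative, not part of the test scripts):

    pages=1024       # HugePages_Total from the snapshot
    page_kb=2048     # Hugepagesize in kB
    echo $(( pages * page_kb ))   # prints 2097152, matching the 'Hugetlb: 2097152 kB' line

HugePages_Free is also 1024 in the same snapshot, so none of the pool is in use at this point in the run.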
continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.058 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.059 
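With AnonHugePages and HugePages_Surp both read back as 0 (the anon=0 and surp=0 assignments above), the same scan now runs for HugePages_Rsvd. A minimal sketch of what that scan does, assuming a simplified stand-in for setup/common.sh's get_meminfo rather than the verbatim script (function and variable names below are illustrative):

    #!/usr/bin/env bash
    # Illustrative only: pick one "key: value" field out of /proc/meminfo the
    # way the IFS=': ' / read loop in the trace does.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node queries fall back to that node's meminfo file when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val
        while read -r line; do
            # per-node files prefix every line with "Node <n> "; strip that prefix
            if [[ $line == Node\ * ]]; then
                line=${line#Node }
                line=${line#* }
            fi
            var=${line%%:*}
            val=${line#*:}
            if [[ $var == "$get" ]]; then
                echo "${val//[!0-9]/}"   # keep just the number, e.g. 0 or 1024
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

Run as get_meminfo_sketch HugePages_Rsvd it would print 0 on this box, which is exactly the resv=0 result the trace reaches a little further down.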
11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173826924 kB' 'MemAvailable: 176704652 kB' 'Buffers: 3896 kB' 'Cached: 11766156 kB' 'SwapCached: 0 kB' 'Active: 8792604 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400596 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533168 kB' 'Mapped: 214812 kB' 'Shmem: 7870688 kB' 'KReclaimable: 245632 kB' 'Slab: 817548 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571916 kB' 'KernelStack: 20416 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.059 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.060 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.061 nr_hugepages=1024 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.061 resv_hugepages=0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.061 surplus_hugepages=0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.061 anon_hugepages=0 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.061 11:54:18 
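At this point all three inputs to the check are known: anon=0, surp=0, resv=0 against the requested nr_hugepages=1024, and the next scan fetches HugePages_Total. The bookkeeping those hugepages.sh lines perform amounts to the following, written out as a standalone sketch (the helper and names here are illustrative, not the script's own):

    # awk stand-in for the meminfo scan, kept self-contained for this sketch
    meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    requested=1024                        # nr_hugepages echoed above
    anon=$(meminfo AnonHugePages)         # kB of transparent hugepages, expected 0
    surp=$(meminfo HugePages_Surp)
    resv=$(meminfo HugePages_Rsvd)
    total=$(meminfo HugePages_Total)

    # mirrors the "(( 1024 == nr_hugepages + surp + resv ))" style check above:
    # the pool the kernel reports should be exactly what was asked for, with
    # no surplus or reserved pages hiding in the count.
    if (( requested == total + surp + resv )) && (( anon == 0 )); then
        echo "hugepage pool looks as requested: $total pages"
    else
        echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv anon=$anon" >&2
    fi

On this run every one of those reads comes back consistent (1024 total, 0 everywhere else), which is why the trace simply proceeds into the HugePages_Total query that follows.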
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173827272 kB' 'MemAvailable: 176705000 kB' 'Buffers: 3896 kB' 'Cached: 11766180 kB' 'SwapCached: 0 kB' 'Active: 8792628 kB' 'Inactive: 3507356 kB' 'Active(anon): 8400620 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507356 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533176 kB' 'Mapped: 214812 kB' 'Shmem: 7870712 kB' 'KReclaimable: 245632 kB' 'Slab: 817548 kB' 'SReclaimable: 245632 kB' 'SUnreclaim: 571916 kB' 'KernelStack: 20416 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 9904640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 74496 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2833364 kB' 'DirectMap2M: 14671872 kB' 'DirectMap1G: 184549376 kB' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.061 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.062 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.063 11:54:18 
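What the trace above is exercising is the common.sh meminfo lookup: the helper slurps /proc/meminfo (or a node's own meminfo file), strips any "Node N " prefix, then walks the dump key by key with IFS=': ' until it hits the requested field and echoes its value. A minimal sketch of that lookup pattern, assuming only the standard /proc and /sys paths (the function name and exact structure here are illustrative, not the SPDK implementation itself):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Sketch only: look up one field from /proc/meminfo or a node's meminfo.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# get_meminfo_sketch HugePages_Total    -> 1024 on this host
# get_meminfo_sketch HugePages_Surp 0   -> 0 for node 0

The long run of "[[ X == HugePages_Total ]] ... continue" entries above is simply this scan being traced once per meminfo key until the match is found.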
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84856420 kB' 'MemUsed: 12806264 kB' 'SwapCached: 0 kB' 'Active: 5759232 kB' 'Inactive: 3337684 kB' 'Active(anon): 5601692 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3337684 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8892348 kB' 'Mapped: 79536 kB' 'AnonPages: 207780 kB' 'Shmem: 5397124 kB' 'KernelStack: 11928 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130064 kB' 'Slab: 401496 kB' 'SReclaimable: 130064 kB' 'SUnreclaim: 271432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.063 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.064 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.065 node0=1024 expecting 1024 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- 
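Above, get_nodes records one expected hugepage count per NUMA node (1024 on node 0, 0 on node 1), the per-node HugePages_Surp lookup returns 0, and the bookkeeping ends with the "node0=1024 expecting 1024" line. A hedged sketch of one way to gather the same per-node numbers straight from sysfs (the exact helper the script uses is not shown in this excerpt; 2048 kB is the hugepage size reported above):

shopt -s extglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
# On this machine this yields nodes_sys[0]=1024 and nodes_sys[1]=0, which is what
# the "node0=1024 expecting 1024" check compares against.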
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.065 00:04:29.065 real 0m6.025s 00:04:29.065 user 0m2.476s 00:04:29.065 sys 0m3.685s 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.065 11:54:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.065 ************************************ 00:04:29.065 END TEST no_shrink_alloc 00:04:29.065 ************************************ 00:04:29.065 11:54:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.065 11:54:18 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.065 00:04:29.065 real 0m22.870s 00:04:29.065 user 0m8.965s 00:04:29.065 sys 0m13.631s 00:04:29.065 11:54:18 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.065 11:54:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.065 ************************************ 00:04:29.065 END TEST hugepages 00:04:29.065 ************************************ 00:04:29.065 11:54:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:29.065 11:54:18 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.065 11:54:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.065 11:54:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.065 11:54:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.065 ************************************ 00:04:29.065 START TEST driver 00:04:29.065 ************************************ 00:04:29.065 11:54:18 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:29.065 * Looking for test storage... 
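The clear_hp step traced above returns every node's hugepage pools to zero once the hugepages suite finishes, then exports CLEAR_HUGE=yes exactly as shown. A minimal equivalent of that loop (must run as root; paths are the standard sysfs layout):

for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"     # drop this node's reservation for that page size
    done
done
export CLEAR_HUGE=yes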
00:04:29.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.065 11:54:18 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:29.065 11:54:18 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.065 11:54:18 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.263 11:54:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:33.263 11:54:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.263 11:54:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.263 11:54:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.263 ************************************ 00:04:33.263 START TEST guess_driver 00:04:33.263 ************************************ 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:33.263 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:33.263 Looking for driver=vfio-pci 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.263 11:54:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 
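The guess_driver test above settles on vfio-pci because the host exposes 174 IOMMU groups and modprobe can resolve the vfio_pci dependency chain. A condensed sketch of that decision (function name illustrative; the real script also honours the unsafe no-IOMMU override read above and has a fallback path that this run never reaches):

pick_driver_sketch() {
    local unsafe_vfio=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    local n_groups
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    if (( n_groups > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # vfio-pci is only useful if the module and its dependencies can be loaded.
        if modprobe --show-depends vfio_pci &>/dev/null; then
            echo vfio-pci
            return
        fi
    fi
    echo 'No valid driver found'
}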
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:25 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.549 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.808 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.808 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.808 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.067 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:37.067 11:54:26 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:37.067 11:54:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.067 11:54:26 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.276 00:04:41.276 real 0m7.982s 00:04:41.276 user 0m2.333s 00:04:41.276 sys 0m4.097s 00:04:41.276 11:54:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.276 11:54:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.276 ************************************ 00:04:41.276 END TEST guess_driver 00:04:41.276 ************************************ 00:04:41.276 11:54:31 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:41.276 00:04:41.277 real 0m12.254s 00:04:41.277 user 0m3.573s 00:04:41.277 sys 0m6.320s 00:04:41.277 11:54:31 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.277 11:54:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.277 ************************************ 00:04:41.277 END TEST driver 00:04:41.277 ************************************ 00:04:41.277 11:54:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:41.277 11:54:31 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:41.277 11:54:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.277 11:54:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.277 11:54:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.277 ************************************ 00:04:41.277 START TEST devices 00:04:41.277 ************************************ 00:04:41.277 11:54:31 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:41.277 * Looking for test storage... 00:04:41.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:41.277 11:54:31 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:41.277 11:54:31 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:41.277 11:54:31 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.277 11:54:31 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:44.568 
11:54:34 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:44.568 No valid GPT data, bailing 00:04:44.568 11:54:34 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:44.568 11:54:34 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:44.568 11:54:34 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.568 11:54:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.568 ************************************ 00:04:44.568 START TEST nvme_mount 00:04:44.568 ************************************ 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
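Before the mount test starts, the devices suite above decides that nvme0n1 is a safe scratch disk: it is not zoned, spdk-gpt.py and blkid find no partition table ("No valid GPT data, bailing"), and its 1000204886016 bytes clear the 3 GiB minimum. A hedged sketch of that eligibility check (helper name illustrative; the real flow consults SPDK's spdk-gpt.py before falling back to blkid):

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace

disk_is_usable_sketch() {
    local block=$1
    # Zoned namespaces are skipped outright.
    [[ $(< "/sys/block/$block/queue/zoned") != none ]] && return 1
    # A non-empty PTTYPE means an existing partition table, i.e. the disk is in use.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$block") ]] && return 1
    # /sys/block/<dev>/size is in 512-byte sectors.
    local bytes=$(( $(< "/sys/block/$block/size") * 512 ))
    (( bytes >= min_disk_size ))
}

disk_is_usable_sketch nvme0n1 && echo "nvme0n1 selected as test disk"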
# (( part <= part_no )) 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.568 11:54:34 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:45.530 Creating new GPT entries in memory. 00:04:45.530 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.530 other utilities. 00:04:45.530 11:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.530 11:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.530 11:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.530 11:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.530 11:54:35 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:46.910 Creating new GPT entries in memory. 00:04:46.910 The operation has completed successfully. 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 927829 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.910 11:54:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.910 11:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.523 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.523 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.524 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.524 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.784 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.784 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.784 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.784 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.784 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:49.784 11:54:39 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:49.784 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.784 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:49.784 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.043 11:54:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.583 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.843 11:54:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
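The verify loop traced here reads `pci _ _ status` records from `setup.sh config` output (with PCI_ALLOWED pinned to 0000:5e:00.0) and only counts the disk as found when the status column lists the expected mount under "Active devices:". A minimal, self-contained sketch of that matching pattern follows; the function name and the fabricated sample input are assumptions for illustration and are not the real setup/devices.sh.

#!/usr/bin/env bash
# Sketch of the allow-list match performed by the verify loop above.
# check_active and the sample status lines are illustrative only.
check_active() {
    local want_pci=$1 want_mount=$2 found=0 pci status
    while read -r pci _ _ status; do
        # Only the allowed controller is interesting; other PCI functions are skipped.
        [[ $pci == "$want_pci" ]] || continue
        # A busy controller reports something like:
        #   "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done
    (( found == 1 ))
}

# Usage with two fabricated status records:
printf '%s\n' \
  '0000:5e:00.0 x x Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev' \
  '0000:00:04.7 x x ' |
  check_active 0000:5e:00.0 nvme0n1:nvme0n1p1 && echo 'nvme0n1p1 is mounted as expected'
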
00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:55.386 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.645 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.645 00:04:55.645 real 0m11.039s 00:04:55.645 user 0m3.324s 00:04:55.645 sys 0m5.557s 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.645 11:54:45 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.645 ************************************ 00:04:55.645 END TEST nvme_mount 00:04:55.645 ************************************ 00:04:55.645 11:54:45 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:55.645 11:54:45 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.645 11:54:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.645 11:54:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.645 11:54:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.645 ************************************ 00:04:55.645 START TEST dm_mount 00:04:55.645 ************************************ 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.645 11:54:45 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:57.017 Creating new GPT entries in memory. 00:04:57.017 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:57.017 other utilities. 00:04:57.017 11:54:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:57.017 11:54:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.017 11:54:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:57.017 11:54:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.017 11:54:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.952 Creating new GPT entries in memory. 00:04:57.952 The operation has completed successfully. 00:04:57.952 11:54:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.952 11:54:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.952 11:54:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.952 11:54:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.952 11:54:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:58.887 The operation has completed successfully. 00:04:58.887 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:58.887 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.887 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 931988 00:04:58.887 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.888 11:54:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.176 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:02.177 11:54:51 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.177 11:54:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.713 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.714 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.714 00:05:04.714 real 0m8.921s 00:05:04.714 user 0m2.241s 00:05:04.714 sys 0m3.721s 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.714 11:54:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.714 ************************************ 00:05:04.714 END TEST dm_mount 00:05:04.714 ************************************ 00:05:04.714 11:54:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.714 11:54:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.973 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.973 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.973 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.973 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.973 11:54:54 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:04.973 00:05:04.973 real 0m23.687s 00:05:04.973 user 0m6.928s 00:05:04.973 sys 0m11.519s 00:05:04.973 11:54:54 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.973 11:54:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:04.973 ************************************ 00:05:04.973 END TEST devices 00:05:04.973 ************************************ 00:05:04.973 11:54:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:04.973 00:05:04.973 real 1m19.735s 00:05:04.973 user 0m26.563s 00:05:04.973 sys 0m43.954s 00:05:04.973 11:54:54 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.973 11:54:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.973 ************************************ 00:05:04.973 END TEST setup.sh 00:05:04.973 ************************************ 00:05:04.973 11:54:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:04.973 11:54:54 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:08.262 Hugepages 00:05:08.262 node hugesize free / total 00:05:08.262 node0 1048576kB 0 / 0 00:05:08.262 node0 2048kB 2048 / 2048 00:05:08.262 node1 1048576kB 0 / 0 00:05:08.262 node1 2048kB 0 / 0 00:05:08.262 00:05:08.262 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:08.262 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:08.262 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:08.262 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:08.262 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:08.262 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:08.262 11:54:57 -- spdk/autotest.sh@130 -- # uname -s 00:05:08.262 11:54:57 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:08.262 11:54:57 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:08.262 11:54:57 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:10.799 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.799 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.738 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.738 11:55:01 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:12.673 11:55:02 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:12.673 11:55:02 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:12.673 11:55:02 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.673 11:55:02 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:12.673 11:55:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:12.673 11:55:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:12.673 11:55:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.673 11:55:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:12.673 11:55:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:12.932 11:55:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:12.932 11:55:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:12.932 11:55:02 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.504 Waiting for block devices as requested 00:05:15.504 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:15.763 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:15.763 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:15.763 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:16.022 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:16.022 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:16.022 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:16.281 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:16.281 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:05:16.281 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:16.540 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:16.540 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:16.540 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:16.540 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:16.799 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:16.799 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:16.799 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:17.058 11:55:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:17.058 11:55:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:17.058 11:55:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:17.058 11:55:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:17.058 11:55:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:17.058 11:55:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:17.058 11:55:06 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:17.058 11:55:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:17.058 11:55:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:17.058 11:55:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:17.058 11:55:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:17.058 11:55:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:17.058 11:55:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:17.058 11:55:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:17.058 11:55:06 -- common/autotest_common.sh@1557 -- # continue 00:05:17.058 11:55:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:17.058 11:55:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:17.058 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.058 11:55:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:17.058 11:55:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:17.058 11:55:06 -- common/autotest_common.sh@10 -- # set +x 00:05:17.058 11:55:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:20.351 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
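The pre-cleanup pass above resolves the controller's PCI address to its /dev/nvmeX node through sysfs and then checks `nvme id-ctrl` for the namespace-management bit in OACS (0xe & 0x8 = 8 here) and the unallocated capacity. The commands match the trace; the standalone wrapper below is only a sketch, assumes nvme-cli is installed and the disk is bound to the kernel nvme driver, and uses an illustrative function name.

#!/usr/bin/env bash
# Sketch of the BDF -> /dev/nvmeX resolution and OACS check traced above.
set -euo pipefail

nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # Resolves to e.g. /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme" | head -n1)
    [[ -n $path ]] && basename "$path"
}

bdf=0000:5e:00.0
ctrlr=/dev/$(nvme_ctrlr_from_bdf "$bdf")

# OACS bit 3 (value 0x8) advertises namespace management support.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    echo "$ctrlr supports namespace management, unallocated capacity:$unvmcap"
fi
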
00:05:20.351 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:20.351 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:20.610 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:20.869 11:55:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:20.869 11:55:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.869 11:55:10 -- common/autotest_common.sh@10 -- # set +x 00:05:20.869 11:55:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:20.869 11:55:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:20.869 11:55:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:20.869 11:55:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:20.869 11:55:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:20.869 11:55:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:20.870 11:55:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:20.870 11:55:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:20.870 11:55:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.870 11:55:10 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:20.870 11:55:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:20.870 11:55:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:20.870 11:55:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:20.870 11:55:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:20.870 11:55:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:20.870 11:55:10 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:20.870 11:55:10 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:20.870 11:55:10 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:20.870 11:55:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:20.870 11:55:10 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:20.870 11:55:10 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=940822 00:05:20.870 11:55:10 -- common/autotest_common.sh@1598 -- # waitforlisten 940822 00:05:20.870 11:55:10 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.870 11:55:10 -- common/autotest_common.sh@829 -- # '[' -z 940822 ']' 00:05:20.870 11:55:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.870 11:55:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.870 11:55:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.870 11:55:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.870 11:55:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.128 [2024-07-15 11:55:10.895204] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
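opal_revert_cleanup, whose start is traced above, builds its candidate list by piping scripts/gen_nvme.sh through jq and keeping only controllers whose PCI device id (read from sysfs) is 0x0a54 before it launches spdk_tgt. A hedged standalone sketch of that filter follows; it assumes an SPDK checkout at the usual $rootdir and jq on the PATH, and uses plain variables instead of the harness helpers.

#!/usr/bin/env bash
# Sketch of the NVMe BDF discovery and device-id filter seen in the trace above.
# Assumes an SPDK checkout and jq; 0x0a54 is the device id the harness filters on here.
rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
target_id=0x0a54

# All NVMe controller PCI addresses known to SPDK's config generator.
mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

matching=()
for bdf in "${all_bdfs[@]}"; do
    # sysfs exposes the PCI device id, e.g. "0x0a54" for this disk.
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == "$target_id" ]] && matching+=("$bdf")
done

printf 'OPAL revert candidate: %s\n' "${matching[@]:-none}"
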
00:05:21.129 [2024-07-15 11:55:10.895262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940822 ] 00:05:21.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.129 [2024-07-15 11:55:10.965258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.129 [2024-07-15 11:55:11.006744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.696 11:55:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.696 11:55:11 -- common/autotest_common.sh@862 -- # return 0 00:05:21.696 11:55:11 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:21.696 11:55:11 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:21.696 11:55:11 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:24.983 nvme0n1 00:05:24.983 11:55:14 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:24.983 [2024-07-15 11:55:14.832343] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:24.983 request: 00:05:24.983 { 00:05:24.983 "nvme_ctrlr_name": "nvme0", 00:05:24.983 "password": "test", 00:05:24.983 "method": "bdev_nvme_opal_revert", 00:05:24.983 "req_id": 1 00:05:24.983 } 00:05:24.983 Got JSON-RPC error response 00:05:24.983 response: 00:05:24.983 { 00:05:24.983 "code": -32602, 00:05:24.983 "message": "Invalid parameters" 00:05:24.983 } 00:05:24.983 11:55:14 -- common/autotest_common.sh@1604 -- # true 00:05:24.983 11:55:14 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:24.983 11:55:14 -- common/autotest_common.sh@1608 -- # killprocess 940822 00:05:24.983 11:55:14 -- common/autotest_common.sh@948 -- # '[' -z 940822 ']' 00:05:24.983 11:55:14 -- common/autotest_common.sh@952 -- # kill -0 940822 00:05:24.983 11:55:14 -- common/autotest_common.sh@953 -- # uname 00:05:24.983 11:55:14 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.983 11:55:14 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940822 00:05:24.983 11:55:14 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.983 11:55:14 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.983 11:55:14 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940822' 00:05:24.983 killing process with pid 940822 00:05:24.983 11:55:14 -- common/autotest_common.sh@967 -- # kill 940822 00:05:24.983 11:55:14 -- common/autotest_common.sh@972 -- # wait 940822 00:05:26.889 11:55:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:26.889 11:55:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:26.889 11:55:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:26.889 11:55:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:26.889 11:55:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:26.889 11:55:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:26.889 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:26.889 11:55:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:26.889 11:55:16 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:26.889 11:55:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:26.889 11:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.889 11:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:26.889 ************************************ 00:05:26.889 START TEST env 00:05:26.889 ************************************ 00:05:26.889 11:55:16 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:26.889 * Looking for test storage... 00:05:26.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:26.889 11:55:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.889 11:55:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.889 11:55:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.889 11:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.889 ************************************ 00:05:26.889 START TEST env_memory 00:05:26.889 ************************************ 00:05:26.889 11:55:16 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:26.889 00:05:26.889 00:05:26.889 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.889 http://cunit.sourceforge.net/ 00:05:26.889 00:05:26.889 00:05:26.889 Suite: memory 00:05:26.889 Test: alloc and free memory map ...[2024-07-15 11:55:16.660665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:26.889 passed 00:05:26.889 Test: mem map translation ...[2024-07-15 11:55:16.678551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:26.889 [2024-07-15 11:55:16.678566] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:26.889 [2024-07-15 11:55:16.678600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:26.889 [2024-07-15 11:55:16.678607] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:26.889 passed 00:05:26.889 Test: mem map registration ...[2024-07-15 11:55:16.715228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:26.889 [2024-07-15 11:55:16.715244] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:26.889 passed 00:05:26.889 Test: mem map adjacent registrations ...passed 00:05:26.889 00:05:26.889 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.889 suites 1 1 n/a 0 0 00:05:26.889 tests 4 4 4 0 0 00:05:26.889 asserts 152 152 152 0 n/a 00:05:26.889 00:05:26.889 Elapsed time = 0.134 seconds 00:05:26.889 00:05:26.889 real 0m0.147s 00:05:26.889 user 0m0.139s 00:05:26.889 sys 0m0.007s 00:05:26.889 11:55:16 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.889 11:55:16 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:05:26.889 ************************************ 00:05:26.889 END TEST env_memory 00:05:26.889 ************************************ 00:05:26.889 11:55:16 env -- common/autotest_common.sh@1142 -- # return 0 00:05:26.890 11:55:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:26.890 11:55:16 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.890 11:55:16 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.890 11:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.890 ************************************ 00:05:26.890 START TEST env_vtophys 00:05:26.890 ************************************ 00:05:26.890 11:55:16 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:26.890 EAL: lib.eal log level changed from notice to debug 00:05:26.890 EAL: Detected lcore 0 as core 0 on socket 0 00:05:26.890 EAL: Detected lcore 1 as core 1 on socket 0 00:05:26.890 EAL: Detected lcore 2 as core 2 on socket 0 00:05:26.890 EAL: Detected lcore 3 as core 3 on socket 0 00:05:26.890 EAL: Detected lcore 4 as core 4 on socket 0 00:05:26.890 EAL: Detected lcore 5 as core 5 on socket 0 00:05:26.890 EAL: Detected lcore 6 as core 6 on socket 0 00:05:26.890 EAL: Detected lcore 7 as core 8 on socket 0 00:05:26.890 EAL: Detected lcore 8 as core 9 on socket 0 00:05:26.890 EAL: Detected lcore 9 as core 10 on socket 0 00:05:26.890 EAL: Detected lcore 10 as core 11 on socket 0 00:05:26.890 EAL: Detected lcore 11 as core 12 on socket 0 00:05:26.890 EAL: Detected lcore 12 as core 13 on socket 0 00:05:26.890 EAL: Detected lcore 13 as core 16 on socket 0 00:05:26.890 EAL: Detected lcore 14 as core 17 on socket 0 00:05:26.890 EAL: Detected lcore 15 as core 18 on socket 0 00:05:26.890 EAL: Detected lcore 16 as core 19 on socket 0 00:05:26.890 EAL: Detected lcore 17 as core 20 on socket 0 00:05:26.890 EAL: Detected lcore 18 as core 21 on socket 0 00:05:26.890 EAL: Detected lcore 19 as core 25 on socket 0 00:05:26.890 EAL: Detected lcore 20 as core 26 on socket 0 00:05:26.890 EAL: Detected lcore 21 as core 27 on socket 0 00:05:26.890 EAL: Detected lcore 22 as core 28 on socket 0 00:05:26.890 EAL: Detected lcore 23 as core 29 on socket 0 00:05:26.890 EAL: Detected lcore 24 as core 0 on socket 1 00:05:26.890 EAL: Detected lcore 25 as core 1 on socket 1 00:05:26.890 EAL: Detected lcore 26 as core 2 on socket 1 00:05:26.890 EAL: Detected lcore 27 as core 3 on socket 1 00:05:26.890 EAL: Detected lcore 28 as core 4 on socket 1 00:05:26.890 EAL: Detected lcore 29 as core 5 on socket 1 00:05:26.890 EAL: Detected lcore 30 as core 6 on socket 1 00:05:26.890 EAL: Detected lcore 31 as core 9 on socket 1 00:05:26.890 EAL: Detected lcore 32 as core 10 on socket 1 00:05:26.890 EAL: Detected lcore 33 as core 11 on socket 1 00:05:26.890 EAL: Detected lcore 34 as core 12 on socket 1 00:05:26.890 EAL: Detected lcore 35 as core 13 on socket 1 00:05:26.890 EAL: Detected lcore 36 as core 16 on socket 1 00:05:26.890 EAL: Detected lcore 37 as core 17 on socket 1 00:05:26.890 EAL: Detected lcore 38 as core 18 on socket 1 00:05:26.890 EAL: Detected lcore 39 as core 19 on socket 1 00:05:26.890 EAL: Detected lcore 40 as core 20 on socket 1 00:05:26.890 EAL: Detected lcore 41 as core 21 on socket 1 00:05:26.890 EAL: Detected lcore 42 as core 24 on socket 1 00:05:26.890 EAL: Detected lcore 43 as core 25 on socket 1 00:05:26.890 EAL: Detected lcore 44 as core 26 
on socket 1 00:05:26.890 EAL: Detected lcore 45 as core 27 on socket 1 00:05:26.890 EAL: Detected lcore 46 as core 28 on socket 1 00:05:26.890 EAL: Detected lcore 47 as core 29 on socket 1 00:05:26.890 EAL: Detected lcore 48 as core 0 on socket 0 00:05:26.890 EAL: Detected lcore 49 as core 1 on socket 0 00:05:26.890 EAL: Detected lcore 50 as core 2 on socket 0 00:05:26.890 EAL: Detected lcore 51 as core 3 on socket 0 00:05:26.890 EAL: Detected lcore 52 as core 4 on socket 0 00:05:26.890 EAL: Detected lcore 53 as core 5 on socket 0 00:05:26.890 EAL: Detected lcore 54 as core 6 on socket 0 00:05:26.890 EAL: Detected lcore 55 as core 8 on socket 0 00:05:26.890 EAL: Detected lcore 56 as core 9 on socket 0 00:05:26.890 EAL: Detected lcore 57 as core 10 on socket 0 00:05:26.890 EAL: Detected lcore 58 as core 11 on socket 0 00:05:26.890 EAL: Detected lcore 59 as core 12 on socket 0 00:05:26.890 EAL: Detected lcore 60 as core 13 on socket 0 00:05:26.890 EAL: Detected lcore 61 as core 16 on socket 0 00:05:26.890 EAL: Detected lcore 62 as core 17 on socket 0 00:05:26.890 EAL: Detected lcore 63 as core 18 on socket 0 00:05:26.890 EAL: Detected lcore 64 as core 19 on socket 0 00:05:26.890 EAL: Detected lcore 65 as core 20 on socket 0 00:05:26.890 EAL: Detected lcore 66 as core 21 on socket 0 00:05:26.890 EAL: Detected lcore 67 as core 25 on socket 0 00:05:26.890 EAL: Detected lcore 68 as core 26 on socket 0 00:05:26.890 EAL: Detected lcore 69 as core 27 on socket 0 00:05:26.890 EAL: Detected lcore 70 as core 28 on socket 0 00:05:26.890 EAL: Detected lcore 71 as core 29 on socket 0 00:05:26.890 EAL: Detected lcore 72 as core 0 on socket 1 00:05:26.890 EAL: Detected lcore 73 as core 1 on socket 1 00:05:26.890 EAL: Detected lcore 74 as core 2 on socket 1 00:05:26.890 EAL: Detected lcore 75 as core 3 on socket 1 00:05:26.890 EAL: Detected lcore 76 as core 4 on socket 1 00:05:26.890 EAL: Detected lcore 77 as core 5 on socket 1 00:05:26.890 EAL: Detected lcore 78 as core 6 on socket 1 00:05:26.890 EAL: Detected lcore 79 as core 9 on socket 1 00:05:26.890 EAL: Detected lcore 80 as core 10 on socket 1 00:05:26.890 EAL: Detected lcore 81 as core 11 on socket 1 00:05:26.890 EAL: Detected lcore 82 as core 12 on socket 1 00:05:26.890 EAL: Detected lcore 83 as core 13 on socket 1 00:05:26.890 EAL: Detected lcore 84 as core 16 on socket 1 00:05:26.890 EAL: Detected lcore 85 as core 17 on socket 1 00:05:26.890 EAL: Detected lcore 86 as core 18 on socket 1 00:05:26.890 EAL: Detected lcore 87 as core 19 on socket 1 00:05:26.890 EAL: Detected lcore 88 as core 20 on socket 1 00:05:26.890 EAL: Detected lcore 89 as core 21 on socket 1 00:05:26.890 EAL: Detected lcore 90 as core 24 on socket 1 00:05:26.890 EAL: Detected lcore 91 as core 25 on socket 1 00:05:26.890 EAL: Detected lcore 92 as core 26 on socket 1 00:05:26.890 EAL: Detected lcore 93 as core 27 on socket 1 00:05:26.890 EAL: Detected lcore 94 as core 28 on socket 1 00:05:26.890 EAL: Detected lcore 95 as core 29 on socket 1 00:05:26.890 EAL: Maximum logical cores by configuration: 128 00:05:26.890 EAL: Detected CPU lcores: 96 00:05:26.890 EAL: Detected NUMA nodes: 2 00:05:26.890 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:26.890 EAL: Detected shared linkage of DPDK 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:26.890 EAL: 
Registered [vdev] bus. 00:05:26.890 EAL: bus.vdev log level changed from disabled to notice 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:26.890 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:26.890 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:26.890 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:26.890 EAL: No shared files mode enabled, IPC will be disabled 00:05:26.890 EAL: No shared files mode enabled, IPC is disabled 00:05:26.890 EAL: Bus pci wants IOVA as 'DC' 00:05:26.890 EAL: Bus vdev wants IOVA as 'DC' 00:05:26.890 EAL: Buses did not request a specific IOVA mode. 00:05:26.890 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:26.890 EAL: Selected IOVA mode 'VA' 00:05:26.890 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.890 EAL: Probing VFIO support... 00:05:26.890 EAL: IOMMU type 1 (Type 1) is supported 00:05:26.890 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:26.890 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:26.890 EAL: VFIO support initialized 00:05:26.890 EAL: Ask a virtual area of 0x2e000 bytes 00:05:26.890 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:26.890 EAL: Setting up physically contiguous memory... 
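(Aside on the "No free 2048 kB hugepages reported on node 1" notice above: it is informational for this run. A quick way to inspect the per-node 2 MB hugepage pools outside the autotest — illustrative commands only, not part of the test scripts — is:)

# Illustrative only; not part of spdk/scripts or the autotest flow.
for n in /sys/devices/system/node/node*; do
  echo "$n: $(cat "$n"/hugepages/hugepages-2048kB/free_hugepages) free 2 MB pages"
done
grep -i hugepages /proc/meminfo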
00:05:26.890 EAL: Setting maximum number of open files to 524288 00:05:26.890 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:26.890 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:26.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:26.890 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.890 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.890 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:26.890 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:26.890 EAL: Ask a virtual area of 0x61000 bytes 00:05:26.890 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:26.890 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:26.891 EAL: Ask a virtual area of 0x400000000 bytes 00:05:26.891 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:26.891 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:26.891 EAL: Hugepages will be freed exactly as allocated. 00:05:26.891 EAL: No shared files mode enabled, IPC is disabled 00:05:26.891 EAL: No shared files mode enabled, IPC is disabled 00:05:26.891 EAL: TSC frequency is ~2300000 KHz 00:05:26.891 EAL: Main lcore 0 is ready (tid=7fddd7c01a00;cpuset=[0]) 00:05:26.891 EAL: Trying to obtain current memory policy. 00:05:26.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.891 EAL: Restoring previous memory policy: 0 00:05:26.891 EAL: request: mp_malloc_sync 00:05:26.891 EAL: No shared files mode enabled, IPC is disabled 00:05:26.891 EAL: Heap on socket 0 was expanded by 2MB 00:05:26.891 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:05:26.891 EAL: probe driver: 8086:37d2 net_i40e 00:05:26.891 EAL: Not managed by a supported kernel driver, skipped 00:05:26.891 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:05:26.891 EAL: probe driver: 8086:37d2 net_i40e 00:05:26.891 EAL: Not managed by a supported kernel driver, skipped 00:05:26.891 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:27.150 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.150 00:05:27.150 00:05:27.150 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.150 http://cunit.sourceforge.net/ 00:05:27.150 00:05:27.150 00:05:27.150 Suite: components_suite 00:05:27.150 Test: vtophys_malloc_test ...passed 00:05:27.150 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.150 EAL: Trying to obtain current memory policy. 
00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 34MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 34MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 66MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 66MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 130MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 130MB 00:05:27.150 EAL: Trying to obtain current memory policy. 00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.150 EAL: Restoring previous memory policy: 4 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was expanded by 258MB 00:05:27.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.150 EAL: request: mp_malloc_sync 00:05:27.150 EAL: No shared files mode enabled, IPC is disabled 00:05:27.150 EAL: Heap on socket 0 was shrunk by 258MB 00:05:27.150 EAL: Trying to obtain current memory policy. 
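(Aside: the heap-expansion sizes logged by vtophys_spdk_malloc_test — 4, 6, 10, 18, 34, 66, 130, 258 MB above, with 514 MB and 1026 MB following — are consistent with allocations of 2 MB + 2^k MB for k = 1..10. The sketch below only reproduces the observed sequence; it is an inference from the logged numbers, not a claim about the test's implementation:)

# Reproduces the expansion sizes seen in this suite, assuming the 2 MB + 2^k MB pattern.
for k in $(seq 1 10); do printf '%dMB ' $((2 + (1 << k))); done; echo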
00:05:27.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.409 EAL: Restoring previous memory policy: 4 00:05:27.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.409 EAL: request: mp_malloc_sync 00:05:27.409 EAL: No shared files mode enabled, IPC is disabled 00:05:27.409 EAL: Heap on socket 0 was expanded by 514MB 00:05:27.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.409 EAL: request: mp_malloc_sync 00:05:27.409 EAL: No shared files mode enabled, IPC is disabled 00:05:27.409 EAL: Heap on socket 0 was shrunk by 514MB 00:05:27.409 EAL: Trying to obtain current memory policy. 00:05:27.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.681 EAL: Restoring previous memory policy: 4 00:05:27.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.681 EAL: request: mp_malloc_sync 00:05:27.681 EAL: No shared files mode enabled, IPC is disabled 00:05:27.681 EAL: Heap on socket 0 was expanded by 1026MB 00:05:27.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.940 EAL: request: mp_malloc_sync 00:05:27.940 EAL: No shared files mode enabled, IPC is disabled 00:05:27.940 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.940 passed 00:05:27.940 00:05:27.940 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.940 suites 1 1 n/a 0 0 00:05:27.940 tests 2 2 2 0 0 00:05:27.940 asserts 497 497 497 0 n/a 00:05:27.940 00:05:27.940 Elapsed time = 0.974 seconds 00:05:27.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.940 EAL: request: mp_malloc_sync 00:05:27.940 EAL: No shared files mode enabled, IPC is disabled 00:05:27.940 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.940 EAL: No shared files mode enabled, IPC is disabled 00:05:27.940 EAL: No shared files mode enabled, IPC is disabled 00:05:27.940 EAL: No shared files mode enabled, IPC is disabled 00:05:27.940 00:05:27.940 real 0m1.101s 00:05:27.940 user 0m0.639s 00:05:27.940 sys 0m0.429s 00:05:27.940 11:55:17 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.940 11:55:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:27.941 ************************************ 00:05:27.941 END TEST env_vtophys 00:05:27.941 ************************************ 00:05:28.201 11:55:17 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.201 11:55:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.201 11:55:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.201 11:55:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.201 11:55:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.201 ************************************ 00:05:28.201 START TEST env_pci 00:05:28.201 ************************************ 00:05:28.201 11:55:18 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:28.201 00:05:28.201 00:05:28.201 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.201 http://cunit.sourceforge.net/ 00:05:28.201 00:05:28.201 00:05:28.201 Suite: pci 00:05:28.201 Test: pci_hook ...[2024-07-15 11:55:18.020771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 942132 has claimed it 00:05:28.201 EAL: Cannot find device (10000:00:01.0) 00:05:28.201 EAL: Failed to attach device on primary process 00:05:28.201 passed 00:05:28.201 
00:05:28.201 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.201 suites 1 1 n/a 0 0 00:05:28.201 tests 1 1 1 0 0 00:05:28.201 asserts 25 25 25 0 n/a 00:05:28.201 00:05:28.201 Elapsed time = 0.026 seconds 00:05:28.201 00:05:28.201 real 0m0.045s 00:05:28.201 user 0m0.008s 00:05:28.201 sys 0m0.037s 00:05:28.201 11:55:18 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.201 11:55:18 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:28.201 ************************************ 00:05:28.201 END TEST env_pci 00:05:28.201 ************************************ 00:05:28.201 11:55:18 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.201 11:55:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:28.201 11:55:18 env -- env/env.sh@15 -- # uname 00:05:28.201 11:55:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:28.201 11:55:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:28.201 11:55:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.201 11:55:18 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:28.201 11:55:18 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.201 11:55:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.201 ************************************ 00:05:28.201 START TEST env_dpdk_post_init 00:05:28.201 ************************************ 00:05:28.201 11:55:18 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:28.201 EAL: Detected CPU lcores: 96 00:05:28.201 EAL: Detected NUMA nodes: 2 00:05:28.201 EAL: Detected shared linkage of DPDK 00:05:28.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.201 EAL: Selected IOVA mode 'VA' 00:05:28.201 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.201 EAL: VFIO support initialized 00:05:28.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.460 EAL: Using IOMMU type 1 (Type 1) 00:05:28.460 EAL: Ignore mapping IO port bar(1) 00:05:28.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:28.460 EAL: Ignore mapping IO port bar(1) 00:05:28.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:28.460 EAL: Ignore mapping IO port bar(1) 00:05:28.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:28.460 EAL: Ignore mapping IO port bar(1) 00:05:28.460 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:28.461 EAL: Ignore mapping IO port bar(1) 00:05:28.461 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:28.461 EAL: Ignore mapping IO port bar(1) 00:05:28.461 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:28.461 EAL: Ignore mapping IO port bar(1) 00:05:28.461 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:28.461 EAL: Ignore mapping IO port bar(1) 00:05:28.461 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:29.398 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 
00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:29.398 EAL: Ignore mapping IO port bar(1) 00:05:29.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:32.687 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:32.687 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:32.687 Starting DPDK initialization... 00:05:32.687 Starting SPDK post initialization... 00:05:32.687 SPDK NVMe probe 00:05:32.687 Attaching to 0000:5e:00.0 00:05:32.687 Attached to 0000:5e:00.0 00:05:32.687 Cleaning up... 00:05:32.687 00:05:32.687 real 0m4.345s 00:05:32.687 user 0m3.294s 00:05:32.687 sys 0m0.126s 00:05:32.687 11:55:22 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.687 11:55:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:32.687 ************************************ 00:05:32.687 END TEST env_dpdk_post_init 00:05:32.687 ************************************ 00:05:32.687 11:55:22 env -- common/autotest_common.sh@1142 -- # return 0 00:05:32.687 11:55:22 env -- env/env.sh@26 -- # uname 00:05:32.687 11:55:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.687 11:55:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.687 11:55:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.687 11:55:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.687 11:55:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.687 ************************************ 00:05:32.687 START TEST env_mem_callbacks 00:05:32.687 ************************************ 00:05:32.687 11:55:22 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.687 EAL: Detected CPU lcores: 96 00:05:32.687 EAL: Detected NUMA nodes: 2 00:05:32.687 EAL: Detected shared linkage of DPDK 00:05:32.687 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.687 EAL: Selected IOVA mode 'VA' 00:05:32.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.687 EAL: VFIO support initialized 00:05:32.687 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.687 00:05:32.687 00:05:32.687 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.687 http://cunit.sourceforge.net/ 00:05:32.687 00:05:32.687 00:05:32.687 Suite: memory 00:05:32.687 Test: test ... 
00:05:32.687 register 0x200000200000 2097152 00:05:32.687 malloc 3145728 00:05:32.687 register 0x200000400000 4194304 00:05:32.687 buf 0x200000500000 len 3145728 PASSED 00:05:32.687 malloc 64 00:05:32.687 buf 0x2000004fff40 len 64 PASSED 00:05:32.687 malloc 4194304 00:05:32.687 register 0x200000800000 6291456 00:05:32.687 buf 0x200000a00000 len 4194304 PASSED 00:05:32.687 free 0x200000500000 3145728 00:05:32.687 free 0x2000004fff40 64 00:05:32.687 unregister 0x200000400000 4194304 PASSED 00:05:32.687 free 0x200000a00000 4194304 00:05:32.687 unregister 0x200000800000 6291456 PASSED 00:05:32.687 malloc 8388608 00:05:32.687 register 0x200000400000 10485760 00:05:32.687 buf 0x200000600000 len 8388608 PASSED 00:05:32.687 free 0x200000600000 8388608 00:05:32.687 unregister 0x200000400000 10485760 PASSED 00:05:32.687 passed 00:05:32.687 00:05:32.687 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.687 suites 1 1 n/a 0 0 00:05:32.687 tests 1 1 1 0 0 00:05:32.687 asserts 15 15 15 0 n/a 00:05:32.687 00:05:32.687 Elapsed time = 0.008 seconds 00:05:32.687 00:05:32.687 real 0m0.060s 00:05:32.687 user 0m0.018s 00:05:32.687 sys 0m0.041s 00:05:32.687 11:55:22 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.687 11:55:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:32.687 ************************************ 00:05:32.687 END TEST env_mem_callbacks 00:05:32.687 ************************************ 00:05:32.687 11:55:22 env -- common/autotest_common.sh@1142 -- # return 0 00:05:32.687 00:05:32.687 real 0m6.148s 00:05:32.687 user 0m4.275s 00:05:32.687 sys 0m0.944s 00:05:32.687 11:55:22 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.687 11:55:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.687 ************************************ 00:05:32.687 END TEST env 00:05:32.687 ************************************ 00:05:32.687 11:55:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.687 11:55:22 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.687 11:55:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.687 11:55:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.687 11:55:22 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 ************************************ 00:05:32.947 START TEST rpc 00:05:32.947 ************************************ 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.947 * Looking for test storage... 00:05:32.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.947 11:55:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=942952 00:05:32.947 11:55:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.947 11:55:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:32.947 11:55:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 942952 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@829 -- # '[' -z 942952 ']' 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:32.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.947 11:55:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 [2024-07-15 11:55:22.858443] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:05:32.947 [2024-07-15 11:55:22.858489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942952 ] 00:05:32.947 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.947 [2024-07-15 11:55:22.928844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.206 [2024-07-15 11:55:22.969651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:33.206 [2024-07-15 11:55:22.969691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 942952' to capture a snapshot of events at runtime. 00:05:33.206 [2024-07-15 11:55:22.969698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:33.206 [2024-07-15 11:55:22.969704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:33.206 [2024-07-15 11:55:22.969708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid942952 for offline analysis/debug. 00:05:33.206 [2024-07-15 11:55:22.969726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.206 11:55:23 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.206 11:55:23 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:33.206 11:55:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.206 11:55:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:33.206 11:55:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:33.206 11:55:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:33.206 11:55:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.206 11:55:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.206 11:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.206 ************************************ 00:05:33.206 START TEST rpc_integrity 00:05:33.206 ************************************ 00:05:33.206 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:33.206 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.206 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.206 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.206 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.206 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:33.206 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.465 { 00:05:33.465 "name": "Malloc0", 00:05:33.465 "aliases": [ 00:05:33.465 "c79d75c8-fcb1-45b7-afc7-2ca32d8726f1" 00:05:33.465 ], 00:05:33.465 "product_name": "Malloc disk", 00:05:33.465 "block_size": 512, 00:05:33.465 "num_blocks": 16384, 00:05:33.465 "uuid": "c79d75c8-fcb1-45b7-afc7-2ca32d8726f1", 00:05:33.465 "assigned_rate_limits": { 00:05:33.465 "rw_ios_per_sec": 0, 00:05:33.465 "rw_mbytes_per_sec": 0, 00:05:33.465 "r_mbytes_per_sec": 0, 00:05:33.465 "w_mbytes_per_sec": 0 00:05:33.465 }, 00:05:33.465 "claimed": false, 00:05:33.465 "zoned": false, 00:05:33.465 "supported_io_types": { 00:05:33.465 "read": true, 00:05:33.465 "write": true, 00:05:33.465 "unmap": true, 00:05:33.465 "flush": true, 00:05:33.465 "reset": true, 00:05:33.465 "nvme_admin": false, 00:05:33.465 "nvme_io": false, 00:05:33.465 "nvme_io_md": false, 00:05:33.465 "write_zeroes": true, 00:05:33.465 "zcopy": true, 00:05:33.465 "get_zone_info": false, 00:05:33.465 "zone_management": false, 00:05:33.465 "zone_append": false, 00:05:33.465 "compare": false, 00:05:33.465 "compare_and_write": false, 00:05:33.465 "abort": true, 00:05:33.465 "seek_hole": false, 00:05:33.465 "seek_data": false, 00:05:33.465 "copy": true, 00:05:33.465 "nvme_iov_md": false 00:05:33.465 }, 00:05:33.465 "memory_domains": [ 00:05:33.465 { 00:05:33.465 "dma_device_id": "system", 00:05:33.465 "dma_device_type": 1 00:05:33.465 }, 00:05:33.465 { 00:05:33.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.465 "dma_device_type": 2 00:05:33.465 } 00:05:33.465 ], 00:05:33.465 "driver_specific": {} 00:05:33.465 } 00:05:33.465 ]' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 [2024-07-15 11:55:23.312569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.465 [2024-07-15 11:55:23.312597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.465 [2024-07-15 11:55:23.312611] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x68ec60 00:05:33.465 [2024-07-15 11:55:23.312617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.465 
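(Aside: the rpc_integrity steps traced here can be reproduced by hand against the running spdk_tgt, assuming rpc_cmd ultimately drives scripts/rpc.py over the default /var/tmp/spdk.sock socket. A minimal sketch using only the RPC names that appear in this trace:)

# Illustrative manual equivalent of the rpc_integrity flow; socket path assumed default.
./scripts/rpc.py bdev_malloc_create 8 512                     # creates Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0 # layers Passthru0 on Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                   # expect 2 bdevs
./scripts/rpc.py bdev_passthru_delete Passthru0               # teardown, as in the trace that follows
./scripts/rpc.py bdev_malloc_delete Malloc0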
[2024-07-15 11:55:23.313723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.465 [2024-07-15 11:55:23.313743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.465 Passthru0 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.465 { 00:05:33.465 "name": "Malloc0", 00:05:33.465 "aliases": [ 00:05:33.465 "c79d75c8-fcb1-45b7-afc7-2ca32d8726f1" 00:05:33.465 ], 00:05:33.465 "product_name": "Malloc disk", 00:05:33.465 "block_size": 512, 00:05:33.465 "num_blocks": 16384, 00:05:33.465 "uuid": "c79d75c8-fcb1-45b7-afc7-2ca32d8726f1", 00:05:33.465 "assigned_rate_limits": { 00:05:33.465 "rw_ios_per_sec": 0, 00:05:33.465 "rw_mbytes_per_sec": 0, 00:05:33.465 "r_mbytes_per_sec": 0, 00:05:33.465 "w_mbytes_per_sec": 0 00:05:33.465 }, 00:05:33.465 "claimed": true, 00:05:33.465 "claim_type": "exclusive_write", 00:05:33.465 "zoned": false, 00:05:33.465 "supported_io_types": { 00:05:33.465 "read": true, 00:05:33.465 "write": true, 00:05:33.465 "unmap": true, 00:05:33.465 "flush": true, 00:05:33.465 "reset": true, 00:05:33.465 "nvme_admin": false, 00:05:33.465 "nvme_io": false, 00:05:33.465 "nvme_io_md": false, 00:05:33.465 "write_zeroes": true, 00:05:33.465 "zcopy": true, 00:05:33.465 "get_zone_info": false, 00:05:33.465 "zone_management": false, 00:05:33.465 "zone_append": false, 00:05:33.465 "compare": false, 00:05:33.465 "compare_and_write": false, 00:05:33.465 "abort": true, 00:05:33.465 "seek_hole": false, 00:05:33.465 "seek_data": false, 00:05:33.465 "copy": true, 00:05:33.465 "nvme_iov_md": false 00:05:33.465 }, 00:05:33.465 "memory_domains": [ 00:05:33.465 { 00:05:33.465 "dma_device_id": "system", 00:05:33.465 "dma_device_type": 1 00:05:33.465 }, 00:05:33.465 { 00:05:33.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.465 "dma_device_type": 2 00:05:33.465 } 00:05:33.465 ], 00:05:33.465 "driver_specific": {} 00:05:33.465 }, 00:05:33.465 { 00:05:33.465 "name": "Passthru0", 00:05:33.465 "aliases": [ 00:05:33.465 "ff5eb407-fbb8-5a85-ab06-62d2db5462b2" 00:05:33.465 ], 00:05:33.465 "product_name": "passthru", 00:05:33.465 "block_size": 512, 00:05:33.465 "num_blocks": 16384, 00:05:33.465 "uuid": "ff5eb407-fbb8-5a85-ab06-62d2db5462b2", 00:05:33.465 "assigned_rate_limits": { 00:05:33.465 "rw_ios_per_sec": 0, 00:05:33.465 "rw_mbytes_per_sec": 0, 00:05:33.465 "r_mbytes_per_sec": 0, 00:05:33.465 "w_mbytes_per_sec": 0 00:05:33.465 }, 00:05:33.465 "claimed": false, 00:05:33.465 "zoned": false, 00:05:33.465 "supported_io_types": { 00:05:33.465 "read": true, 00:05:33.465 "write": true, 00:05:33.465 "unmap": true, 00:05:33.465 "flush": true, 00:05:33.465 "reset": true, 00:05:33.465 "nvme_admin": false, 00:05:33.465 "nvme_io": false, 00:05:33.465 "nvme_io_md": false, 00:05:33.465 "write_zeroes": true, 00:05:33.465 "zcopy": true, 00:05:33.465 "get_zone_info": false, 00:05:33.465 "zone_management": false, 00:05:33.465 "zone_append": false, 00:05:33.465 "compare": false, 00:05:33.465 "compare_and_write": false, 00:05:33.465 "abort": true, 00:05:33.465 "seek_hole": false, 
00:05:33.465 "seek_data": false, 00:05:33.465 "copy": true, 00:05:33.465 "nvme_iov_md": false 00:05:33.465 }, 00:05:33.465 "memory_domains": [ 00:05:33.465 { 00:05:33.465 "dma_device_id": "system", 00:05:33.465 "dma_device_type": 1 00:05:33.465 }, 00:05:33.465 { 00:05:33.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.465 "dma_device_type": 2 00:05:33.465 } 00:05:33.465 ], 00:05:33.465 "driver_specific": { 00:05:33.465 "passthru": { 00:05:33.465 "name": "Passthru0", 00:05:33.465 "base_bdev_name": "Malloc0" 00:05:33.465 } 00:05:33.465 } 00:05:33.465 } 00:05:33.465 ]' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.465 11:55:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.465 00:05:33.465 real 0m0.275s 00:05:33.465 user 0m0.176s 00:05:33.465 sys 0m0.032s 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.465 11:55:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.465 ************************************ 00:05:33.465 END TEST rpc_integrity 00:05:33.465 ************************************ 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:33.724 11:55:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 ************************************ 00:05:33.724 START TEST rpc_plugins 00:05:33.724 ************************************ 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.724 { 00:05:33.724 "name": "Malloc1", 00:05:33.724 "aliases": [ 00:05:33.724 "98fa1666-6d9a-4edb-aa70-678948498c07" 00:05:33.724 ], 00:05:33.724 "product_name": "Malloc disk", 00:05:33.724 "block_size": 4096, 00:05:33.724 "num_blocks": 256, 00:05:33.724 "uuid": "98fa1666-6d9a-4edb-aa70-678948498c07", 00:05:33.724 "assigned_rate_limits": { 00:05:33.724 "rw_ios_per_sec": 0, 00:05:33.724 "rw_mbytes_per_sec": 0, 00:05:33.724 "r_mbytes_per_sec": 0, 00:05:33.724 "w_mbytes_per_sec": 0 00:05:33.724 }, 00:05:33.724 "claimed": false, 00:05:33.724 "zoned": false, 00:05:33.724 "supported_io_types": { 00:05:33.724 "read": true, 00:05:33.724 "write": true, 00:05:33.724 "unmap": true, 00:05:33.724 "flush": true, 00:05:33.724 "reset": true, 00:05:33.724 "nvme_admin": false, 00:05:33.724 "nvme_io": false, 00:05:33.724 "nvme_io_md": false, 00:05:33.724 "write_zeroes": true, 00:05:33.724 "zcopy": true, 00:05:33.724 "get_zone_info": false, 00:05:33.724 "zone_management": false, 00:05:33.724 "zone_append": false, 00:05:33.724 "compare": false, 00:05:33.724 "compare_and_write": false, 00:05:33.724 "abort": true, 00:05:33.724 "seek_hole": false, 00:05:33.724 "seek_data": false, 00:05:33.724 "copy": true, 00:05:33.724 "nvme_iov_md": false 00:05:33.724 }, 00:05:33.724 "memory_domains": [ 00:05:33.724 { 00:05:33.724 "dma_device_id": "system", 00:05:33.724 "dma_device_type": 1 00:05:33.724 }, 00:05:33.724 { 00:05:33.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.724 "dma_device_type": 2 00:05:33.724 } 00:05:33.724 ], 00:05:33.724 "driver_specific": {} 00:05:33.724 } 00:05:33.724 ]' 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.724 11:55:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.724 00:05:33.724 real 0m0.142s 00:05:33.724 user 0m0.091s 00:05:33.724 sys 0m0.016s 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.724 11:55:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.724 ************************************ 00:05:33.724 END TEST rpc_plugins 00:05:33.724 ************************************ 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:33.724 11:55:23 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.724 11:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 ************************************ 00:05:33.982 START TEST rpc_trace_cmd_test 00:05:33.982 ************************************ 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.982 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid942952", 00:05:33.982 "tpoint_group_mask": "0x8", 00:05:33.982 "iscsi_conn": { 00:05:33.982 "mask": "0x2", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "scsi": { 00:05:33.982 "mask": "0x4", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "bdev": { 00:05:33.982 "mask": "0x8", 00:05:33.982 "tpoint_mask": "0xffffffffffffffff" 00:05:33.982 }, 00:05:33.982 "nvmf_rdma": { 00:05:33.982 "mask": "0x10", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "nvmf_tcp": { 00:05:33.982 "mask": "0x20", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "ftl": { 00:05:33.982 "mask": "0x40", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "blobfs": { 00:05:33.982 "mask": "0x80", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "dsa": { 00:05:33.982 "mask": "0x200", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "thread": { 00:05:33.982 "mask": "0x400", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "nvme_pcie": { 00:05:33.982 "mask": "0x800", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "iaa": { 00:05:33.982 "mask": "0x1000", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "nvme_tcp": { 00:05:33.982 "mask": "0x2000", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "bdev_nvme": { 00:05:33.982 "mask": "0x4000", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 }, 00:05:33.982 "sock": { 00:05:33.982 "mask": "0x8000", 00:05:33.982 "tpoint_mask": "0x0" 00:05:33.982 } 00:05:33.982 }' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
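For reference, the assertions traced above can be repeated by hand against a running spdk_tgt. This is only a sketch using the RPCs that appear in the log (bdev_get_bdevs, trace_get_info) and the same jq filters; it assumes the default /var/tmp/spdk.sock socket and a target started with the bdev tracepoint group enabled, as the 0x8 group mask above indicates.

    # sketch: repeat the rpc_plugins / rpc_trace_cmd_test checks manually
    bdevs=$(scripts/rpc.py bdev_get_bdevs)        # same call as rpc_cmd bdev_get_bdevs above
    echo "$bdevs" | jq length                     # 1 while Malloc1 exists, 0 after delete_malloc
    info=$(scripts/rpc.py trace_get_info)         # tracepoint state, as dumped above
    echo "$info" | jq 'has("tpoint_group_mask")'  # expected: true
    echo "$info" | jq 'has("tpoint_shm_path")'    # expected: true
    echo "$info" | jq -r .bdev.tpoint_mask        # expected: non-zero, e.g. 0xffffffffffffffff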
00:05:33.982 00:05:33.982 real 0m0.206s 00:05:33.982 user 0m0.170s 00:05:33.982 sys 0m0.026s 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.982 11:55:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.982 ************************************ 00:05:33.982 END TEST rpc_trace_cmd_test 00:05:33.982 ************************************ 00:05:33.982 11:55:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:33.982 11:55:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.982 11:55:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.982 11:55:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.982 11:55:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.982 11:55:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.982 11:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 ************************************ 00:05:34.240 START TEST rpc_daemon_integrity 00:05:34.240 ************************************ 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:34.240 { 00:05:34.240 "name": "Malloc2", 00:05:34.240 "aliases": [ 00:05:34.240 "1837ea53-b1fd-40d1-805f-5ad10b6bb335" 00:05:34.240 ], 00:05:34.240 "product_name": "Malloc disk", 00:05:34.240 "block_size": 512, 00:05:34.240 "num_blocks": 16384, 00:05:34.240 "uuid": "1837ea53-b1fd-40d1-805f-5ad10b6bb335", 00:05:34.240 "assigned_rate_limits": { 00:05:34.240 "rw_ios_per_sec": 0, 00:05:34.240 "rw_mbytes_per_sec": 0, 00:05:34.240 "r_mbytes_per_sec": 0, 00:05:34.240 "w_mbytes_per_sec": 0 00:05:34.240 }, 00:05:34.240 "claimed": false, 00:05:34.240 "zoned": false, 00:05:34.240 "supported_io_types": { 00:05:34.240 "read": true, 00:05:34.240 "write": true, 00:05:34.240 "unmap": true, 00:05:34.240 "flush": true, 00:05:34.240 "reset": true, 00:05:34.240 "nvme_admin": false, 00:05:34.240 "nvme_io": false, 
00:05:34.240 "nvme_io_md": false, 00:05:34.240 "write_zeroes": true, 00:05:34.240 "zcopy": true, 00:05:34.240 "get_zone_info": false, 00:05:34.240 "zone_management": false, 00:05:34.240 "zone_append": false, 00:05:34.240 "compare": false, 00:05:34.240 "compare_and_write": false, 00:05:34.240 "abort": true, 00:05:34.240 "seek_hole": false, 00:05:34.240 "seek_data": false, 00:05:34.240 "copy": true, 00:05:34.240 "nvme_iov_md": false 00:05:34.240 }, 00:05:34.240 "memory_domains": [ 00:05:34.240 { 00:05:34.240 "dma_device_id": "system", 00:05:34.240 "dma_device_type": 1 00:05:34.240 }, 00:05:34.240 { 00:05:34.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.240 "dma_device_type": 2 00:05:34.240 } 00:05:34.240 ], 00:05:34.240 "driver_specific": {} 00:05:34.240 } 00:05:34.240 ]' 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.240 [2024-07-15 11:55:24.146836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:34.240 [2024-07-15 11:55:24.146862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.240 [2024-07-15 11:55:24.146873] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x840470 00:05:34.240 [2024-07-15 11:55:24.146879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.240 [2024-07-15 11:55:24.147819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.240 [2024-07-15 11:55:24.147838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:34.240 Passthru0 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.240 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:34.241 { 00:05:34.241 "name": "Malloc2", 00:05:34.241 "aliases": [ 00:05:34.241 "1837ea53-b1fd-40d1-805f-5ad10b6bb335" 00:05:34.241 ], 00:05:34.241 "product_name": "Malloc disk", 00:05:34.241 "block_size": 512, 00:05:34.241 "num_blocks": 16384, 00:05:34.241 "uuid": "1837ea53-b1fd-40d1-805f-5ad10b6bb335", 00:05:34.241 "assigned_rate_limits": { 00:05:34.241 "rw_ios_per_sec": 0, 00:05:34.241 "rw_mbytes_per_sec": 0, 00:05:34.241 "r_mbytes_per_sec": 0, 00:05:34.241 "w_mbytes_per_sec": 0 00:05:34.241 }, 00:05:34.241 "claimed": true, 00:05:34.241 "claim_type": "exclusive_write", 00:05:34.241 "zoned": false, 00:05:34.241 "supported_io_types": { 00:05:34.241 "read": true, 00:05:34.241 "write": true, 00:05:34.241 "unmap": true, 00:05:34.241 "flush": true, 00:05:34.241 "reset": true, 00:05:34.241 "nvme_admin": false, 00:05:34.241 "nvme_io": false, 00:05:34.241 "nvme_io_md": false, 00:05:34.241 "write_zeroes": true, 00:05:34.241 "zcopy": true, 00:05:34.241 "get_zone_info": 
false, 00:05:34.241 "zone_management": false, 00:05:34.241 "zone_append": false, 00:05:34.241 "compare": false, 00:05:34.241 "compare_and_write": false, 00:05:34.241 "abort": true, 00:05:34.241 "seek_hole": false, 00:05:34.241 "seek_data": false, 00:05:34.241 "copy": true, 00:05:34.241 "nvme_iov_md": false 00:05:34.241 }, 00:05:34.241 "memory_domains": [ 00:05:34.241 { 00:05:34.241 "dma_device_id": "system", 00:05:34.241 "dma_device_type": 1 00:05:34.241 }, 00:05:34.241 { 00:05:34.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.241 "dma_device_type": 2 00:05:34.241 } 00:05:34.241 ], 00:05:34.241 "driver_specific": {} 00:05:34.241 }, 00:05:34.241 { 00:05:34.241 "name": "Passthru0", 00:05:34.241 "aliases": [ 00:05:34.241 "9f1fd110-e0db-5d6d-9a83-f73a2f9a067a" 00:05:34.241 ], 00:05:34.241 "product_name": "passthru", 00:05:34.241 "block_size": 512, 00:05:34.241 "num_blocks": 16384, 00:05:34.241 "uuid": "9f1fd110-e0db-5d6d-9a83-f73a2f9a067a", 00:05:34.241 "assigned_rate_limits": { 00:05:34.241 "rw_ios_per_sec": 0, 00:05:34.241 "rw_mbytes_per_sec": 0, 00:05:34.241 "r_mbytes_per_sec": 0, 00:05:34.241 "w_mbytes_per_sec": 0 00:05:34.241 }, 00:05:34.241 "claimed": false, 00:05:34.241 "zoned": false, 00:05:34.241 "supported_io_types": { 00:05:34.241 "read": true, 00:05:34.241 "write": true, 00:05:34.241 "unmap": true, 00:05:34.241 "flush": true, 00:05:34.241 "reset": true, 00:05:34.241 "nvme_admin": false, 00:05:34.241 "nvme_io": false, 00:05:34.241 "nvme_io_md": false, 00:05:34.241 "write_zeroes": true, 00:05:34.241 "zcopy": true, 00:05:34.241 "get_zone_info": false, 00:05:34.241 "zone_management": false, 00:05:34.241 "zone_append": false, 00:05:34.241 "compare": false, 00:05:34.241 "compare_and_write": false, 00:05:34.241 "abort": true, 00:05:34.241 "seek_hole": false, 00:05:34.241 "seek_data": false, 00:05:34.241 "copy": true, 00:05:34.241 "nvme_iov_md": false 00:05:34.241 }, 00:05:34.241 "memory_domains": [ 00:05:34.241 { 00:05:34.241 "dma_device_id": "system", 00:05:34.241 "dma_device_type": 1 00:05:34.241 }, 00:05:34.241 { 00:05:34.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:34.241 "dma_device_type": 2 00:05:34.241 } 00:05:34.241 ], 00:05:34.241 "driver_specific": { 00:05:34.241 "passthru": { 00:05:34.241 "name": "Passthru0", 00:05:34.241 "base_bdev_name": "Malloc2" 00:05:34.241 } 00:05:34.241 } 00:05:34.241 } 00:05:34.241 ]' 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.241 11:55:24 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:34.241 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:34.499 11:55:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:34.499 00:05:34.499 real 0m0.266s 00:05:34.499 user 0m0.173s 00:05:34.499 sys 0m0.027s 00:05:34.499 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.499 11:55:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:34.499 ************************************ 00:05:34.499 END TEST rpc_daemon_integrity 00:05:34.499 ************************************ 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:34.499 11:55:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:34.499 11:55:24 rpc -- rpc/rpc.sh@84 -- # killprocess 942952 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@948 -- # '[' -z 942952 ']' 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@952 -- # kill -0 942952 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@953 -- # uname 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942952 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942952' 00:05:34.499 killing process with pid 942952 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@967 -- # kill 942952 00:05:34.499 11:55:24 rpc -- common/autotest_common.sh@972 -- # wait 942952 00:05:34.758 00:05:34.758 real 0m1.933s 00:05:34.758 user 0m2.507s 00:05:34.758 sys 0m0.631s 00:05:34.758 11:55:24 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.758 11:55:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.758 ************************************ 00:05:34.758 END TEST rpc 00:05:34.758 ************************************ 00:05:34.758 11:55:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.758 11:55:24 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.758 11:55:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.758 11:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.758 11:55:24 -- common/autotest_common.sh@10 -- # set +x 00:05:34.758 ************************************ 00:05:34.758 START TEST skip_rpc 00:05:34.758 ************************************ 00:05:34.758 11:55:24 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:35.016 * Looking for test storage... 
00:05:35.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.016 11:55:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.016 11:55:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.016 11:55:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:35.016 11:55:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.016 11:55:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.016 11:55:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.016 ************************************ 00:05:35.016 START TEST skip_rpc 00:05:35.016 ************************************ 00:05:35.016 11:55:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:35.016 11:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=943579 00:05:35.016 11:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:35.016 11:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.016 11:55:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:35.016 [2024-07-15 11:55:24.891940] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:05:35.016 [2024-07-15 11:55:24.891983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid943579 ] 00:05:35.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.016 [2024-07-15 11:55:24.957613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.016 [2024-07-15 11:55:24.997964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 943579 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 943579 ']' 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 943579 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 943579 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 943579' 00:05:40.341 killing process with pid 943579 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 943579 00:05:40.341 11:55:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 943579 00:05:40.341 00:05:40.341 real 0m5.358s 00:05:40.341 user 0m5.116s 00:05:40.341 sys 0m0.274s 00:05:40.341 11:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.341 11:55:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 ************************************ 00:05:40.341 END TEST skip_rpc 00:05:40.341 ************************************ 00:05:40.341 11:55:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.341 11:55:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.341 11:55:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.341 11:55:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.341 11:55:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.341 ************************************ 00:05:40.341 START TEST skip_rpc_with_json 00:05:40.341 ************************************ 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=944485 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 944485 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 944485 ']' 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.341 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.342 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
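The skip_rpc case that finished above reduces to one observable behaviour: a target started with --no-rpc-server never opens the RPC socket, so any rpc.py call must fail. A minimal sketch of that check, assuming the same spdk_tgt binary and default socket path shown in the log (see rpc/skip_rpc.sh@15 through @23 above):

    # sketch of what test_skip_rpc asserts
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target launched without an RPC server
    pid=$!
    sleep 5                                       # same settle delay the test uses
    if scripts/rpc.py spdk_get_version; then      # expected to fail: nothing listens on /var/tmp/spdk.sock
        echo "unexpected: RPC server answered"
    fi
    kill "$pid"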
00:05:40.342 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.342 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.342 [2024-07-15 11:55:30.319681] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:05:40.342 [2024-07-15 11:55:30.319726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944485 ] 00:05:40.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.600 [2024-07-15 11:55:30.387450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.600 [2024-07-15 11:55:30.427353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.859 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.859 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:40.859 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.859 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.859 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 [2024-07-15 11:55:30.627617] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.860 request: 00:05:40.860 { 00:05:40.860 "trtype": "tcp", 00:05:40.860 "method": "nvmf_get_transports", 00:05:40.860 "req_id": 1 00:05:40.860 } 00:05:40.860 Got JSON-RPC error response 00:05:40.860 response: 00:05:40.860 { 00:05:40.860 "code": -19, 00:05:40.860 "message": "No such device" 00:05:40.860 } 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 [2024-07-15 11:55:30.639722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.860 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.860 { 00:05:40.860 "subsystems": [ 00:05:40.860 { 00:05:40.860 "subsystem": "vfio_user_target", 00:05:40.860 "config": null 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "keyring", 00:05:40.860 "config": [] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "iobuf", 00:05:40.860 "config": [ 00:05:40.860 { 00:05:40.860 "method": "iobuf_set_options", 00:05:40.860 "params": { 00:05:40.860 "small_pool_count": 8192, 00:05:40.860 "large_pool_count": 1024, 00:05:40.860 "small_bufsize": 8192, 00:05:40.860 "large_bufsize": 
135168 00:05:40.860 } 00:05:40.860 } 00:05:40.860 ] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "sock", 00:05:40.860 "config": [ 00:05:40.860 { 00:05:40.860 "method": "sock_set_default_impl", 00:05:40.860 "params": { 00:05:40.860 "impl_name": "posix" 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "sock_impl_set_options", 00:05:40.860 "params": { 00:05:40.860 "impl_name": "ssl", 00:05:40.860 "recv_buf_size": 4096, 00:05:40.860 "send_buf_size": 4096, 00:05:40.860 "enable_recv_pipe": true, 00:05:40.860 "enable_quickack": false, 00:05:40.860 "enable_placement_id": 0, 00:05:40.860 "enable_zerocopy_send_server": true, 00:05:40.860 "enable_zerocopy_send_client": false, 00:05:40.860 "zerocopy_threshold": 0, 00:05:40.860 "tls_version": 0, 00:05:40.860 "enable_ktls": false 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "sock_impl_set_options", 00:05:40.860 "params": { 00:05:40.860 "impl_name": "posix", 00:05:40.860 "recv_buf_size": 2097152, 00:05:40.860 "send_buf_size": 2097152, 00:05:40.860 "enable_recv_pipe": true, 00:05:40.860 "enable_quickack": false, 00:05:40.860 "enable_placement_id": 0, 00:05:40.860 "enable_zerocopy_send_server": true, 00:05:40.860 "enable_zerocopy_send_client": false, 00:05:40.860 "zerocopy_threshold": 0, 00:05:40.860 "tls_version": 0, 00:05:40.860 "enable_ktls": false 00:05:40.860 } 00:05:40.860 } 00:05:40.860 ] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "vmd", 00:05:40.860 "config": [] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "accel", 00:05:40.860 "config": [ 00:05:40.860 { 00:05:40.860 "method": "accel_set_options", 00:05:40.860 "params": { 00:05:40.860 "small_cache_size": 128, 00:05:40.860 "large_cache_size": 16, 00:05:40.860 "task_count": 2048, 00:05:40.860 "sequence_count": 2048, 00:05:40.860 "buf_count": 2048 00:05:40.860 } 00:05:40.860 } 00:05:40.860 ] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "bdev", 00:05:40.860 "config": [ 00:05:40.860 { 00:05:40.860 "method": "bdev_set_options", 00:05:40.860 "params": { 00:05:40.860 "bdev_io_pool_size": 65535, 00:05:40.860 "bdev_io_cache_size": 256, 00:05:40.860 "bdev_auto_examine": true, 00:05:40.860 "iobuf_small_cache_size": 128, 00:05:40.860 "iobuf_large_cache_size": 16 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "bdev_raid_set_options", 00:05:40.860 "params": { 00:05:40.860 "process_window_size_kb": 1024 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "bdev_iscsi_set_options", 00:05:40.860 "params": { 00:05:40.860 "timeout_sec": 30 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "bdev_nvme_set_options", 00:05:40.860 "params": { 00:05:40.860 "action_on_timeout": "none", 00:05:40.860 "timeout_us": 0, 00:05:40.860 "timeout_admin_us": 0, 00:05:40.860 "keep_alive_timeout_ms": 10000, 00:05:40.860 "arbitration_burst": 0, 00:05:40.860 "low_priority_weight": 0, 00:05:40.860 "medium_priority_weight": 0, 00:05:40.860 "high_priority_weight": 0, 00:05:40.860 "nvme_adminq_poll_period_us": 10000, 00:05:40.860 "nvme_ioq_poll_period_us": 0, 00:05:40.860 "io_queue_requests": 0, 00:05:40.860 "delay_cmd_submit": true, 00:05:40.860 "transport_retry_count": 4, 00:05:40.860 "bdev_retry_count": 3, 00:05:40.860 "transport_ack_timeout": 0, 00:05:40.860 "ctrlr_loss_timeout_sec": 0, 00:05:40.860 "reconnect_delay_sec": 0, 00:05:40.860 "fast_io_fail_timeout_sec": 0, 00:05:40.860 "disable_auto_failback": false, 00:05:40.860 "generate_uuids": false, 00:05:40.860 "transport_tos": 0, 
00:05:40.860 "nvme_error_stat": false, 00:05:40.860 "rdma_srq_size": 0, 00:05:40.860 "io_path_stat": false, 00:05:40.860 "allow_accel_sequence": false, 00:05:40.860 "rdma_max_cq_size": 0, 00:05:40.860 "rdma_cm_event_timeout_ms": 0, 00:05:40.860 "dhchap_digests": [ 00:05:40.860 "sha256", 00:05:40.860 "sha384", 00:05:40.860 "sha512" 00:05:40.860 ], 00:05:40.860 "dhchap_dhgroups": [ 00:05:40.860 "null", 00:05:40.860 "ffdhe2048", 00:05:40.860 "ffdhe3072", 00:05:40.860 "ffdhe4096", 00:05:40.860 "ffdhe6144", 00:05:40.860 "ffdhe8192" 00:05:40.860 ] 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "bdev_nvme_set_hotplug", 00:05:40.860 "params": { 00:05:40.860 "period_us": 100000, 00:05:40.860 "enable": false 00:05:40.860 } 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "method": "bdev_wait_for_examine" 00:05:40.860 } 00:05:40.860 ] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "scsi", 00:05:40.860 "config": null 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "scheduler", 00:05:40.860 "config": [ 00:05:40.860 { 00:05:40.860 "method": "framework_set_scheduler", 00:05:40.860 "params": { 00:05:40.860 "name": "static" 00:05:40.860 } 00:05:40.860 } 00:05:40.860 ] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "vhost_scsi", 00:05:40.860 "config": [] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "vhost_blk", 00:05:40.860 "config": [] 00:05:40.860 }, 00:05:40.860 { 00:05:40.860 "subsystem": "ublk", 00:05:40.861 "config": [] 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "subsystem": "nbd", 00:05:40.861 "config": [] 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "subsystem": "nvmf", 00:05:40.861 "config": [ 00:05:40.861 { 00:05:40.861 "method": "nvmf_set_config", 00:05:40.861 "params": { 00:05:40.861 "discovery_filter": "match_any", 00:05:40.861 "admin_cmd_passthru": { 00:05:40.861 "identify_ctrlr": false 00:05:40.861 } 00:05:40.861 } 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "method": "nvmf_set_max_subsystems", 00:05:40.861 "params": { 00:05:40.861 "max_subsystems": 1024 00:05:40.861 } 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "method": "nvmf_set_crdt", 00:05:40.861 "params": { 00:05:40.861 "crdt1": 0, 00:05:40.861 "crdt2": 0, 00:05:40.861 "crdt3": 0 00:05:40.861 } 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "method": "nvmf_create_transport", 00:05:40.861 "params": { 00:05:40.861 "trtype": "TCP", 00:05:40.861 "max_queue_depth": 128, 00:05:40.861 "max_io_qpairs_per_ctrlr": 127, 00:05:40.861 "in_capsule_data_size": 4096, 00:05:40.861 "max_io_size": 131072, 00:05:40.861 "io_unit_size": 131072, 00:05:40.861 "max_aq_depth": 128, 00:05:40.861 "num_shared_buffers": 511, 00:05:40.861 "buf_cache_size": 4294967295, 00:05:40.861 "dif_insert_or_strip": false, 00:05:40.861 "zcopy": false, 00:05:40.861 "c2h_success": true, 00:05:40.861 "sock_priority": 0, 00:05:40.861 "abort_timeout_sec": 1, 00:05:40.861 "ack_timeout": 0, 00:05:40.861 "data_wr_pool_size": 0 00:05:40.861 } 00:05:40.861 } 00:05:40.861 ] 00:05:40.861 }, 00:05:40.861 { 00:05:40.861 "subsystem": "iscsi", 00:05:40.861 "config": [ 00:05:40.861 { 00:05:40.861 "method": "iscsi_set_options", 00:05:40.861 "params": { 00:05:40.861 "node_base": "iqn.2016-06.io.spdk", 00:05:40.861 "max_sessions": 128, 00:05:40.861 "max_connections_per_session": 2, 00:05:40.861 "max_queue_depth": 64, 00:05:40.861 "default_time2wait": 2, 00:05:40.861 "default_time2retain": 20, 00:05:40.861 "first_burst_length": 8192, 00:05:40.861 "immediate_data": true, 00:05:40.861 "allow_duplicated_isid": false, 00:05:40.861 
"error_recovery_level": 0, 00:05:40.861 "nop_timeout": 60, 00:05:40.861 "nop_in_interval": 30, 00:05:40.861 "disable_chap": false, 00:05:40.861 "require_chap": false, 00:05:40.861 "mutual_chap": false, 00:05:40.861 "chap_group": 0, 00:05:40.861 "max_large_datain_per_connection": 64, 00:05:40.861 "max_r2t_per_connection": 4, 00:05:40.861 "pdu_pool_size": 36864, 00:05:40.861 "immediate_data_pool_size": 16384, 00:05:40.861 "data_out_pool_size": 2048 00:05:40.861 } 00:05:40.861 } 00:05:40.861 ] 00:05:40.861 } 00:05:40.861 ] 00:05:40.861 } 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 944485 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 944485 ']' 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 944485 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944485 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944485' 00:05:40.861 killing process with pid 944485 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 944485 00:05:40.861 11:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 944485 00:05:41.427 11:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=944545 00:05:41.427 11:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:41.427 11:55:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 944545 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 944545 ']' 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 944545 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944545 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944545' 00:05:46.697 killing process with pid 944545 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 944545 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 944545 00:05:46.697 11:55:36 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.697 00:05:46.697 real 0m6.229s 00:05:46.697 user 0m5.914s 00:05:46.697 sys 0m0.585s 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.697 ************************************ 00:05:46.697 END TEST skip_rpc_with_json 00:05:46.697 ************************************ 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.697 11:55:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.697 ************************************ 00:05:46.697 START TEST skip_rpc_with_delay 00:05:46.697 ************************************ 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.697 [2024-07-15 11:55:36.614728] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:46.697 [2024-07-15 11:55:36.614786] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:46.697 00:05:46.697 real 0m0.064s 00:05:46.697 user 0m0.039s 00:05:46.697 sys 0m0.025s 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.697 11:55:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.697 ************************************ 00:05:46.697 END TEST skip_rpc_with_delay 00:05:46.697 ************************************ 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.697 11:55:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.697 11:55:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.697 11:55:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.697 11:55:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.697 ************************************ 00:05:46.697 START TEST exit_on_failed_rpc_init 00:05:46.697 ************************************ 00:05:46.697 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:46.698 11:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=945516 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 945516 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 945516 ']' 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.957 11:55:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.957 [2024-07-15 11:55:36.747103] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:05:46.957 [2024-07-15 11:55:36.747144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945516 ] 00:05:46.957 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.957 [2024-07-15 11:55:36.797400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.957 [2024-07-15 11:55:36.838964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.216 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.216 [2024-07-15 11:55:37.088523] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:05:47.216 [2024-07-15 11:55:37.088571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945615 ] 00:05:47.216 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.216 [2024-07-15 11:55:37.153761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.216 [2024-07-15 11:55:37.193817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.216 [2024-07-15 11:55:37.193881] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:47.216 [2024-07-15 11:55:37.193890] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.216 [2024-07-15 11:55:37.193896] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 945516 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 945516 ']' 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 945516 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 945516 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 945516' 00:05:47.476 killing process with pid 945516 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 945516 00:05:47.476 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 945516 00:05:47.735 00:05:47.735 real 0m0.906s 00:05:47.735 user 0m0.951s 00:05:47.735 sys 0m0.382s 00:05:47.735 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.735 11:55:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.735 ************************************ 00:05:47.735 END TEST exit_on_failed_rpc_init 00:05:47.735 ************************************ 00:05:47.735 11:55:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.735 11:55:37 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.735 00:05:47.735 real 0m12.922s 00:05:47.735 user 0m12.173s 00:05:47.735 sys 0m1.504s 00:05:47.735 11:55:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.735 11:55:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.735 ************************************ 00:05:47.735 END TEST skip_rpc 00:05:47.735 ************************************ 00:05:47.735 11:55:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.735 11:55:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.735 11:55:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.735 11:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.735 11:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:47.735 ************************************ 00:05:47.735 START TEST rpc_client 00:05:47.735 ************************************ 00:05:47.735 11:55:37 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.995 * Looking for test storage... 00:05:47.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:47.995 11:55:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:47.995 OK 00:05:47.995 11:55:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.995 00:05:47.995 real 0m0.115s 00:05:47.995 user 0m0.051s 00:05:47.995 sys 0m0.072s 00:05:47.995 11:55:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.995 11:55:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:47.995 ************************************ 00:05:47.995 END TEST rpc_client 00:05:47.995 ************************************ 00:05:47.995 11:55:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.995 11:55:37 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.995 11:55:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.995 11:55:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.995 11:55:37 -- common/autotest_common.sh@10 -- # set +x 00:05:47.995 ************************************ 00:05:47.995 START TEST json_config 00:05:47.995 ************************************ 00:05:47.995 11:55:37 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.995 11:55:37 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.995 11:55:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.995 11:55:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.995 11:55:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.995 11:55:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.995 11:55:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.995 11:55:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.995 11:55:37 json_config -- paths/export.sh@5 -- # export PATH 00:05:47.995 11:55:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@47 -- # : 0 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.995 11:55:37 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:47.995 11:55:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:47.995 INFO: JSON configuration test init 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:47.995 11:55:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.995 11:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:47.995 11:55:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.995 11:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.995 11:55:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.995 11:55:37 json_config -- json_config/common.sh@9 -- # local app=target 00:05:47.995 11:55:37 json_config -- json_config/common.sh@10 -- # shift 00:05:47.995 11:55:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:47.995 11:55:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:47.995 11:55:37 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:47.995 11:55:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.995 11:55:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:47.995 11:55:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=945864 00:05:47.995 11:55:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:47.995 Waiting for target to run... 00:05:48.255 11:55:37 json_config -- json_config/common.sh@25 -- # waitforlisten 945864 /var/tmp/spdk_tgt.sock 00:05:48.255 11:55:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 945864 ']' 00:05:48.255 11:55:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.256 11:55:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:48.256 11:55:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.256 11:55:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.256 11:55:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.256 11:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.256 [2024-07-15 11:55:38.044123] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:05:48.256 [2024-07-15 11:55:38.044170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945864 ] 00:05:48.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.515 [2024-07-15 11:55:38.330714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.515 [2024-07-15 11:55:38.355797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:49.083 11:55:38 json_config -- json_config/common.sh@26 -- # echo '' 00:05:49.083 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.083 11:55:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.083 11:55:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:49.083 11:55:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.366 11:55:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.366 11:55:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:52.366 11:55:41 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:52.366 11:55:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:52.366 11:55:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.366 11:55:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:52.366 11:55:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:52.367 11:55:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.367 11:55:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.367 11:55:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.367 MallocForNvmf0 00:05:52.367 11:55:42 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.367 11:55:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.623 MallocForNvmf1 
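For reference, the RPC sequence the json_config test drives here (and in the lines that follow) can be reproduced by hand against a target started with --wait-for-rpc. This is a minimal sketch, assuming the repository root as the working directory and the same socket path, NQN, and serial shown in the trace:

  # Create two malloc bdevs (size in MB, block size in bytes), then expose them
  # over an NVMe/TCP subsystem listening on 127.0.0.1:4420 -- mirrors the traced RPCs.
  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420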
00:05:52.623 11:55:42 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.623 11:55:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.881 [2024-07-15 11:55:42.685288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:52.881 11:55:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:52.881 11:55:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.140 11:55:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.140 11:55:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.140 11:55:43 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.140 11:55:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.398 11:55:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.398 11:55:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.399 [2024-07-15 11:55:43.399477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:53.658 11:55:43 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:53.658 11:55:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.658 11:55:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.658 11:55:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:53.658 11:55:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.658 11:55:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.658 11:55:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:53.658 11:55:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.658 11:55:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:53.658 MallocBdevForConfigChangeCheck 00:05:53.917 11:55:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:53.917 11:55:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.917 11:55:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 11:55:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:53.917 11:55:43 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.176 11:55:44 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:54.176 INFO: shutting down applications... 00:05:54.176 11:55:44 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:54.176 11:55:44 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:54.176 11:55:44 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:54.176 11:55:44 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:56.077 Calling clear_iscsi_subsystem 00:05:56.077 Calling clear_nvmf_subsystem 00:05:56.077 Calling clear_nbd_subsystem 00:05:56.077 Calling clear_ublk_subsystem 00:05:56.077 Calling clear_vhost_blk_subsystem 00:05:56.077 Calling clear_vhost_scsi_subsystem 00:05:56.077 Calling clear_bdev_subsystem 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@345 -- # break 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:56.077 11:55:45 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:56.077 11:55:45 json_config -- json_config/common.sh@31 -- # local app=target 00:05:56.077 11:55:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:56.077 11:55:45 json_config -- json_config/common.sh@35 -- # [[ -n 945864 ]] 00:05:56.077 11:55:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 945864 00:05:56.077 11:55:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:56.077 11:55:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.077 11:55:45 json_config -- json_config/common.sh@41 -- # kill -0 945864 00:05:56.077 11:55:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:56.644 11:55:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:56.644 11:55:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:56.644 11:55:46 json_config -- json_config/common.sh@41 -- # kill -0 945864 00:05:56.644 11:55:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:56.644 11:55:46 json_config -- json_config/common.sh@43 -- # break 00:05:56.644 11:55:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:56.644 11:55:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:56.644 SPDK target shutdown done 00:05:56.644 11:55:46 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:56.644 INFO: relaunching applications... 00:05:56.644 11:55:46 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.644 11:55:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:56.644 11:55:46 json_config -- json_config/common.sh@10 -- # shift 00:05:56.644 11:55:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.645 11:55:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.645 11:55:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.645 11:55:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.645 11:55:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.645 11:55:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=947384 00:05:56.645 11:55:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.645 Waiting for target to run... 00:05:56.645 11:55:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.645 11:55:46 json_config -- json_config/common.sh@25 -- # waitforlisten 947384 /var/tmp/spdk_tgt.sock 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 947384 ']' 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.645 11:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.645 [2024-07-15 11:55:46.477954] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:05:56.645 [2024-07-15 11:55:46.478016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947384 ] 00:05:56.645 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.904 [2024-07-15 11:55:46.771845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.904 [2024-07-15 11:55:46.796607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.186 [2024-07-15 11:55:49.791108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.187 [2024-07-15 11:55:49.823413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:00.187 11:55:49 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.187 11:55:49 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:00.187 11:55:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:00.187 00:06:00.187 11:55:49 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:00.187 11:55:49 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:00.187 INFO: Checking if target configuration is the same... 00:06:00.187 11:55:49 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.187 11:55:49 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:00.187 11:55:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.187 + '[' 2 -ne 2 ']' 00:06:00.187 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.187 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:00.187 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.187 +++ basename /dev/fd/62 00:06:00.187 ++ mktemp /tmp/62.XXX 00:06:00.187 + tmp_file_1=/tmp/62.a7J 00:06:00.187 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.187 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.187 + tmp_file_2=/tmp/spdk_tgt_config.json.6X7 00:06:00.187 + ret=0 00:06:00.187 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.445 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.445 + diff -u /tmp/62.a7J /tmp/spdk_tgt_config.json.6X7 00:06:00.445 + echo 'INFO: JSON config files are the same' 00:06:00.445 INFO: JSON config files are the same 00:06:00.445 + rm /tmp/62.a7J /tmp/spdk_tgt_config.json.6X7 00:06:00.445 + exit 0 00:06:00.445 11:55:50 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:00.445 11:55:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:00.445 INFO: changing configuration and checking if this can be detected... 
00:06:00.445 11:55:50 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.446 11:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:00.446 11:55:50 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.446 11:55:50 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:00.446 11:55:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.446 + '[' 2 -ne 2 ']' 00:06:00.446 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:00.446 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:00.446 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:00.446 +++ basename /dev/fd/62 00:06:00.446 ++ mktemp /tmp/62.XXX 00:06:00.446 + tmp_file_1=/tmp/62.EQG 00:06:00.446 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:00.446 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:00.446 + tmp_file_2=/tmp/spdk_tgt_config.json.SoS 00:06:00.446 + ret=0 00:06:00.446 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.032 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.032 + diff -u /tmp/62.EQG /tmp/spdk_tgt_config.json.SoS 00:06:01.032 + ret=1 00:06:01.032 + echo '=== Start of file: /tmp/62.EQG ===' 00:06:01.032 + cat /tmp/62.EQG 00:06:01.032 + echo '=== End of file: /tmp/62.EQG ===' 00:06:01.032 + echo '' 00:06:01.032 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SoS ===' 00:06:01.032 + cat /tmp/spdk_tgt_config.json.SoS 00:06:01.032 + echo '=== End of file: /tmp/spdk_tgt_config.json.SoS ===' 00:06:01.032 + echo '' 00:06:01.032 + rm /tmp/62.EQG /tmp/spdk_tgt_config.json.SoS 00:06:01.032 + exit 1 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:01.032 INFO: configuration change detected. 
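The same/changed decision above is a sorted-JSON diff: the live configuration is dumped with save_config, both it and the saved spdk_tgt_config.json are normalised by config_filter.py, and diff sets the exit code; deleting MallocBdevForConfigChangeCheck is what makes the second comparison fail. A rough equivalent, assuming the filter reads the config on stdin the way json_diff.sh pipes it (the temporary file names here are illustrative):

  # Dump the running target's config and compare it with the saved one.
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
  ./test/json_config/config_filter.py -method sort < live.json            > live.sorted
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved.sorted
  if diff -u live.sorted saved.sorted; then
      echo 'INFO: JSON config files are the same'      # exit 0 path above
  else
      echo 'INFO: configuration change detected.'      # exit 1 path above
  fi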
00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 947384 ]] 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.032 11:55:50 json_config -- json_config/json_config.sh@323 -- # killprocess 947384 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@948 -- # '[' -z 947384 ']' 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@952 -- # kill -0 947384 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@953 -- # uname 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 947384 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 947384' 00:06:01.032 killing process with pid 947384 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@967 -- # kill 947384 00:06:01.032 11:55:50 json_config -- common/autotest_common.sh@972 -- # wait 947384 00:06:02.446 11:55:52 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.446 11:55:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:02.446 11:55:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.446 11:55:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.446 11:55:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:02.446 11:55:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:02.446 INFO: Success 00:06:02.446 00:06:02.446 real 0m14.543s 00:06:02.446 user 
0m15.481s 00:06:02.446 sys 0m1.704s 00:06:02.446 11:55:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.446 11:55:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.446 ************************************ 00:06:02.446 END TEST json_config 00:06:02.446 ************************************ 00:06:02.706 11:55:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.706 11:55:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:02.706 11:55:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.706 11:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.706 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:06:02.706 ************************************ 00:06:02.706 START TEST json_config_extra_key 00:06:02.706 ************************************ 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.706 11:55:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.706 11:55:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.706 11:55:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.706 11:55:52 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.706 11:55:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.706 11:55:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.706 11:55:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:02.706 11:55:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.706 11:55:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.706 11:55:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:02.706 INFO: launching applications... 00:06:02.706 11:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=948641 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.706 Waiting for target to run... 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 948641 /var/tmp/spdk_tgt.sock 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 948641 ']' 00:06:02.706 11:55:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.706 11:55:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.706 [2024-07-15 11:55:52.647249] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:02.706 [2024-07-15 11:55:52.647297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948641 ] 00:06:02.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.275 [2024-07-15 11:55:53.089418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.275 [2024-07-15 11:55:53.122592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.533 11:55:53 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.533 11:55:53 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:03.533 00:06:03.533 11:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:03.533 INFO: shutting down applications... 00:06:03.533 11:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 948641 ]] 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 948641 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 948641 00:06:03.533 11:55:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 948641 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.100 11:55:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.100 SPDK target shutdown done 00:06:04.100 11:55:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.100 Success 00:06:04.100 00:06:04.100 real 0m1.460s 00:06:04.100 user 0m1.080s 00:06:04.100 sys 0m0.538s 00:06:04.100 11:55:53 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.100 11:55:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.100 ************************************ 00:06:04.100 END TEST json_config_extra_key 00:06:04.100 ************************************ 00:06:04.100 11:55:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.100 11:55:53 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.100 11:55:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.100 11:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.100 11:55:53 -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.100 ************************************ 00:06:04.100 START TEST alias_rpc 00:06:04.100 ************************************ 00:06:04.100 11:55:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.360 * Looking for test storage... 00:06:04.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:04.360 11:55:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.360 11:55:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=948924 00:06:04.360 11:55:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 948924 00:06:04.360 11:55:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 948924 ']' 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.360 11:55:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.360 [2024-07-15 11:55:54.171360] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:04.360 [2024-07-15 11:55:54.171410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948924 ] 00:06:04.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.360 [2024-07-15 11:55:54.238174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.360 [2024-07-15 11:55:54.278017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.295 11:55:54 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.295 11:55:54 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.295 11:55:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:05.295 11:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 948924 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 948924 ']' 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 948924 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948924 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948924' 00:06:05.295 killing process with pid 948924 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@967 
-- # kill 948924 00:06:05.295 11:55:55 alias_rpc -- common/autotest_common.sh@972 -- # wait 948924 00:06:05.554 00:06:05.554 real 0m1.501s 00:06:05.554 user 0m1.672s 00:06:05.554 sys 0m0.396s 00:06:05.554 11:55:55 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.554 11:55:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.554 ************************************ 00:06:05.554 END TEST alias_rpc 00:06:05.554 ************************************ 00:06:05.811 11:55:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.811 11:55:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:05.811 11:55:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:05.811 11:55:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.811 11:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.812 11:55:55 -- common/autotest_common.sh@10 -- # set +x 00:06:05.812 ************************************ 00:06:05.812 START TEST spdkcli_tcp 00:06:05.812 ************************************ 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:05.812 * Looking for test storage... 00:06:05.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=949211 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.812 11:55:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 949211 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 949211 ']' 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.812 11:55:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.812 [2024-07-15 11:55:55.745338] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:05.812 [2024-07-15 11:55:55.745383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949211 ] 00:06:05.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.070 [2024-07-15 11:55:55.814153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.070 [2024-07-15 11:55:55.855722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.070 [2024-07-15 11:55:55.855724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.070 11:55:56 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.070 11:55:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:06.070 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=949218 00:06:06.070 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.070 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:06.330 [ 00:06:06.330 "bdev_malloc_delete", 00:06:06.330 "bdev_malloc_create", 00:06:06.330 "bdev_null_resize", 00:06:06.330 "bdev_null_delete", 00:06:06.330 "bdev_null_create", 00:06:06.330 "bdev_nvme_cuse_unregister", 00:06:06.330 "bdev_nvme_cuse_register", 00:06:06.330 "bdev_opal_new_user", 00:06:06.330 "bdev_opal_set_lock_state", 00:06:06.330 "bdev_opal_delete", 00:06:06.330 "bdev_opal_get_info", 00:06:06.330 "bdev_opal_create", 00:06:06.330 "bdev_nvme_opal_revert", 00:06:06.330 "bdev_nvme_opal_init", 00:06:06.330 "bdev_nvme_send_cmd", 00:06:06.330 "bdev_nvme_get_path_iostat", 00:06:06.330 "bdev_nvme_get_mdns_discovery_info", 00:06:06.330 "bdev_nvme_stop_mdns_discovery", 00:06:06.330 "bdev_nvme_start_mdns_discovery", 00:06:06.330 "bdev_nvme_set_multipath_policy", 00:06:06.330 "bdev_nvme_set_preferred_path", 00:06:06.330 "bdev_nvme_get_io_paths", 00:06:06.330 "bdev_nvme_remove_error_injection", 00:06:06.330 "bdev_nvme_add_error_injection", 00:06:06.330 "bdev_nvme_get_discovery_info", 00:06:06.330 "bdev_nvme_stop_discovery", 00:06:06.330 "bdev_nvme_start_discovery", 00:06:06.330 "bdev_nvme_get_controller_health_info", 00:06:06.330 "bdev_nvme_disable_controller", 00:06:06.330 "bdev_nvme_enable_controller", 00:06:06.330 "bdev_nvme_reset_controller", 00:06:06.330 "bdev_nvme_get_transport_statistics", 00:06:06.330 "bdev_nvme_apply_firmware", 00:06:06.330 "bdev_nvme_detach_controller", 00:06:06.330 "bdev_nvme_get_controllers", 00:06:06.330 "bdev_nvme_attach_controller", 00:06:06.330 "bdev_nvme_set_hotplug", 00:06:06.330 "bdev_nvme_set_options", 00:06:06.330 "bdev_passthru_delete", 00:06:06.330 "bdev_passthru_create", 00:06:06.330 "bdev_lvol_set_parent_bdev", 00:06:06.330 "bdev_lvol_set_parent", 00:06:06.330 "bdev_lvol_check_shallow_copy", 00:06:06.330 "bdev_lvol_start_shallow_copy", 00:06:06.330 "bdev_lvol_grow_lvstore", 00:06:06.330 "bdev_lvol_get_lvols", 00:06:06.330 "bdev_lvol_get_lvstores", 00:06:06.330 "bdev_lvol_delete", 00:06:06.330 "bdev_lvol_set_read_only", 00:06:06.330 "bdev_lvol_resize", 00:06:06.330 "bdev_lvol_decouple_parent", 00:06:06.330 "bdev_lvol_inflate", 00:06:06.330 "bdev_lvol_rename", 00:06:06.330 "bdev_lvol_clone_bdev", 00:06:06.330 "bdev_lvol_clone", 00:06:06.330 "bdev_lvol_snapshot", 00:06:06.330 "bdev_lvol_create", 00:06:06.330 "bdev_lvol_delete_lvstore", 00:06:06.330 
"bdev_lvol_rename_lvstore", 00:06:06.330 "bdev_lvol_create_lvstore", 00:06:06.330 "bdev_raid_set_options", 00:06:06.330 "bdev_raid_remove_base_bdev", 00:06:06.330 "bdev_raid_add_base_bdev", 00:06:06.330 "bdev_raid_delete", 00:06:06.330 "bdev_raid_create", 00:06:06.330 "bdev_raid_get_bdevs", 00:06:06.330 "bdev_error_inject_error", 00:06:06.330 "bdev_error_delete", 00:06:06.330 "bdev_error_create", 00:06:06.330 "bdev_split_delete", 00:06:06.330 "bdev_split_create", 00:06:06.330 "bdev_delay_delete", 00:06:06.330 "bdev_delay_create", 00:06:06.330 "bdev_delay_update_latency", 00:06:06.330 "bdev_zone_block_delete", 00:06:06.330 "bdev_zone_block_create", 00:06:06.330 "blobfs_create", 00:06:06.330 "blobfs_detect", 00:06:06.330 "blobfs_set_cache_size", 00:06:06.330 "bdev_aio_delete", 00:06:06.330 "bdev_aio_rescan", 00:06:06.330 "bdev_aio_create", 00:06:06.330 "bdev_ftl_set_property", 00:06:06.330 "bdev_ftl_get_properties", 00:06:06.330 "bdev_ftl_get_stats", 00:06:06.330 "bdev_ftl_unmap", 00:06:06.330 "bdev_ftl_unload", 00:06:06.330 "bdev_ftl_delete", 00:06:06.330 "bdev_ftl_load", 00:06:06.330 "bdev_ftl_create", 00:06:06.330 "bdev_virtio_attach_controller", 00:06:06.330 "bdev_virtio_scsi_get_devices", 00:06:06.330 "bdev_virtio_detach_controller", 00:06:06.330 "bdev_virtio_blk_set_hotplug", 00:06:06.330 "bdev_iscsi_delete", 00:06:06.330 "bdev_iscsi_create", 00:06:06.330 "bdev_iscsi_set_options", 00:06:06.330 "accel_error_inject_error", 00:06:06.330 "ioat_scan_accel_module", 00:06:06.330 "dsa_scan_accel_module", 00:06:06.330 "iaa_scan_accel_module", 00:06:06.330 "vfu_virtio_create_scsi_endpoint", 00:06:06.330 "vfu_virtio_scsi_remove_target", 00:06:06.330 "vfu_virtio_scsi_add_target", 00:06:06.330 "vfu_virtio_create_blk_endpoint", 00:06:06.330 "vfu_virtio_delete_endpoint", 00:06:06.330 "keyring_file_remove_key", 00:06:06.330 "keyring_file_add_key", 00:06:06.330 "keyring_linux_set_options", 00:06:06.330 "iscsi_get_histogram", 00:06:06.330 "iscsi_enable_histogram", 00:06:06.330 "iscsi_set_options", 00:06:06.330 "iscsi_get_auth_groups", 00:06:06.330 "iscsi_auth_group_remove_secret", 00:06:06.330 "iscsi_auth_group_add_secret", 00:06:06.330 "iscsi_delete_auth_group", 00:06:06.330 "iscsi_create_auth_group", 00:06:06.330 "iscsi_set_discovery_auth", 00:06:06.330 "iscsi_get_options", 00:06:06.330 "iscsi_target_node_request_logout", 00:06:06.330 "iscsi_target_node_set_redirect", 00:06:06.330 "iscsi_target_node_set_auth", 00:06:06.330 "iscsi_target_node_add_lun", 00:06:06.330 "iscsi_get_stats", 00:06:06.330 "iscsi_get_connections", 00:06:06.330 "iscsi_portal_group_set_auth", 00:06:06.330 "iscsi_start_portal_group", 00:06:06.330 "iscsi_delete_portal_group", 00:06:06.330 "iscsi_create_portal_group", 00:06:06.330 "iscsi_get_portal_groups", 00:06:06.330 "iscsi_delete_target_node", 00:06:06.330 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.330 "iscsi_target_node_add_pg_ig_maps", 00:06:06.330 "iscsi_create_target_node", 00:06:06.330 "iscsi_get_target_nodes", 00:06:06.330 "iscsi_delete_initiator_group", 00:06:06.330 "iscsi_initiator_group_remove_initiators", 00:06:06.331 "iscsi_initiator_group_add_initiators", 00:06:06.331 "iscsi_create_initiator_group", 00:06:06.331 "iscsi_get_initiator_groups", 00:06:06.331 "nvmf_set_crdt", 00:06:06.331 "nvmf_set_config", 00:06:06.331 "nvmf_set_max_subsystems", 00:06:06.331 "nvmf_stop_mdns_prr", 00:06:06.331 "nvmf_publish_mdns_prr", 00:06:06.331 "nvmf_subsystem_get_listeners", 00:06:06.331 "nvmf_subsystem_get_qpairs", 00:06:06.331 "nvmf_subsystem_get_controllers", 00:06:06.331 
"nvmf_get_stats", 00:06:06.331 "nvmf_get_transports", 00:06:06.331 "nvmf_create_transport", 00:06:06.331 "nvmf_get_targets", 00:06:06.331 "nvmf_delete_target", 00:06:06.331 "nvmf_create_target", 00:06:06.331 "nvmf_subsystem_allow_any_host", 00:06:06.331 "nvmf_subsystem_remove_host", 00:06:06.331 "nvmf_subsystem_add_host", 00:06:06.331 "nvmf_ns_remove_host", 00:06:06.331 "nvmf_ns_add_host", 00:06:06.331 "nvmf_subsystem_remove_ns", 00:06:06.331 "nvmf_subsystem_add_ns", 00:06:06.331 "nvmf_subsystem_listener_set_ana_state", 00:06:06.331 "nvmf_discovery_get_referrals", 00:06:06.331 "nvmf_discovery_remove_referral", 00:06:06.331 "nvmf_discovery_add_referral", 00:06:06.331 "nvmf_subsystem_remove_listener", 00:06:06.331 "nvmf_subsystem_add_listener", 00:06:06.331 "nvmf_delete_subsystem", 00:06:06.331 "nvmf_create_subsystem", 00:06:06.331 "nvmf_get_subsystems", 00:06:06.331 "env_dpdk_get_mem_stats", 00:06:06.331 "nbd_get_disks", 00:06:06.331 "nbd_stop_disk", 00:06:06.331 "nbd_start_disk", 00:06:06.331 "ublk_recover_disk", 00:06:06.331 "ublk_get_disks", 00:06:06.331 "ublk_stop_disk", 00:06:06.331 "ublk_start_disk", 00:06:06.331 "ublk_destroy_target", 00:06:06.331 "ublk_create_target", 00:06:06.331 "virtio_blk_create_transport", 00:06:06.331 "virtio_blk_get_transports", 00:06:06.331 "vhost_controller_set_coalescing", 00:06:06.331 "vhost_get_controllers", 00:06:06.331 "vhost_delete_controller", 00:06:06.331 "vhost_create_blk_controller", 00:06:06.331 "vhost_scsi_controller_remove_target", 00:06:06.331 "vhost_scsi_controller_add_target", 00:06:06.331 "vhost_start_scsi_controller", 00:06:06.331 "vhost_create_scsi_controller", 00:06:06.331 "thread_set_cpumask", 00:06:06.331 "framework_get_governor", 00:06:06.331 "framework_get_scheduler", 00:06:06.331 "framework_set_scheduler", 00:06:06.331 "framework_get_reactors", 00:06:06.331 "thread_get_io_channels", 00:06:06.331 "thread_get_pollers", 00:06:06.331 "thread_get_stats", 00:06:06.331 "framework_monitor_context_switch", 00:06:06.331 "spdk_kill_instance", 00:06:06.331 "log_enable_timestamps", 00:06:06.331 "log_get_flags", 00:06:06.331 "log_clear_flag", 00:06:06.331 "log_set_flag", 00:06:06.331 "log_get_level", 00:06:06.331 "log_set_level", 00:06:06.331 "log_get_print_level", 00:06:06.331 "log_set_print_level", 00:06:06.331 "framework_enable_cpumask_locks", 00:06:06.331 "framework_disable_cpumask_locks", 00:06:06.331 "framework_wait_init", 00:06:06.331 "framework_start_init", 00:06:06.331 "scsi_get_devices", 00:06:06.331 "bdev_get_histogram", 00:06:06.331 "bdev_enable_histogram", 00:06:06.331 "bdev_set_qos_limit", 00:06:06.331 "bdev_set_qd_sampling_period", 00:06:06.331 "bdev_get_bdevs", 00:06:06.331 "bdev_reset_iostat", 00:06:06.331 "bdev_get_iostat", 00:06:06.331 "bdev_examine", 00:06:06.331 "bdev_wait_for_examine", 00:06:06.331 "bdev_set_options", 00:06:06.331 "notify_get_notifications", 00:06:06.331 "notify_get_types", 00:06:06.331 "accel_get_stats", 00:06:06.331 "accel_set_options", 00:06:06.331 "accel_set_driver", 00:06:06.331 "accel_crypto_key_destroy", 00:06:06.331 "accel_crypto_keys_get", 00:06:06.331 "accel_crypto_key_create", 00:06:06.331 "accel_assign_opc", 00:06:06.331 "accel_get_module_info", 00:06:06.331 "accel_get_opc_assignments", 00:06:06.331 "vmd_rescan", 00:06:06.331 "vmd_remove_device", 00:06:06.331 "vmd_enable", 00:06:06.331 "sock_get_default_impl", 00:06:06.331 "sock_set_default_impl", 00:06:06.331 "sock_impl_set_options", 00:06:06.331 "sock_impl_get_options", 00:06:06.331 "iobuf_get_stats", 00:06:06.331 "iobuf_set_options", 
00:06:06.331 "keyring_get_keys", 00:06:06.331 "framework_get_pci_devices", 00:06:06.331 "framework_get_config", 00:06:06.331 "framework_get_subsystems", 00:06:06.331 "vfu_tgt_set_base_path", 00:06:06.331 "trace_get_info", 00:06:06.331 "trace_get_tpoint_group_mask", 00:06:06.331 "trace_disable_tpoint_group", 00:06:06.331 "trace_enable_tpoint_group", 00:06:06.331 "trace_clear_tpoint_mask", 00:06:06.331 "trace_set_tpoint_mask", 00:06:06.331 "spdk_get_version", 00:06:06.331 "rpc_get_methods" 00:06:06.331 ] 00:06:06.331 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.331 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.331 11:55:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 949211 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 949211 ']' 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 949211 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949211 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949211' 00:06:06.331 killing process with pid 949211 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 949211 00:06:06.331 11:55:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 949211 00:06:06.900 00:06:06.900 real 0m0.992s 00:06:06.900 user 0m1.647s 00:06:06.900 sys 0m0.436s 00:06:06.900 11:55:56 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.900 11:55:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.900 ************************************ 00:06:06.900 END TEST spdkcli_tcp 00:06:06.900 ************************************ 00:06:06.900 11:55:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.900 11:55:56 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.900 11:55:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.900 11:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.900 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.900 ************************************ 00:06:06.900 START TEST dpdk_mem_utility 00:06:06.900 ************************************ 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.900 * Looking for test storage... 
00:06:06.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:06.900 11:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.900 11:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=949505 00:06:06.900 11:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 949505 00:06:06.900 11:55:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 949505 ']' 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.900 11:55:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.900 [2024-07-15 11:55:56.801126] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:06.900 [2024-07-15 11:55:56.801178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949505 ] 00:06:06.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.900 [2024-07-15 11:55:56.869617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.159 [2024-07-15 11:55:56.910360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.726 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.726 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:07.726 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:07.726 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:07.726 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.726 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.726 { 00:06:07.726 "filename": "/tmp/spdk_mem_dump.txt" 00:06:07.726 } 00:06:07.726 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.726 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:07.726 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:07.726 1 heaps totaling size 814.000000 MiB 00:06:07.726 size: 814.000000 MiB heap id: 0 00:06:07.726 end heaps---------- 00:06:07.726 8 mempools totaling size 598.116089 MiB 00:06:07.726 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:07.726 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:07.726 size: 84.521057 MiB name: bdev_io_949505 00:06:07.726 size: 51.011292 MiB name: evtpool_949505 00:06:07.726 size: 
50.003479 MiB name: msgpool_949505 00:06:07.726 size: 21.763794 MiB name: PDU_Pool 00:06:07.726 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:07.726 size: 0.026123 MiB name: Session_Pool 00:06:07.726 end mempools------- 00:06:07.726 6 memzones totaling size 4.142822 MiB 00:06:07.726 size: 1.000366 MiB name: RG_ring_0_949505 00:06:07.726 size: 1.000366 MiB name: RG_ring_1_949505 00:06:07.726 size: 1.000366 MiB name: RG_ring_4_949505 00:06:07.726 size: 1.000366 MiB name: RG_ring_5_949505 00:06:07.726 size: 0.125366 MiB name: RG_ring_2_949505 00:06:07.726 size: 0.015991 MiB name: RG_ring_3_949505 00:06:07.726 end memzones------- 00:06:07.726 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:07.726 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:07.726 list of free elements. size: 12.519348 MiB 00:06:07.726 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:07.726 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:07.726 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:07.726 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:07.726 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:07.726 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:07.726 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:07.726 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:07.726 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:07.726 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:07.726 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:07.726 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:07.726 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:07.726 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:07.726 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:07.726 list of standard malloc elements. 
size: 199.218079 MiB 00:06:07.726 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:07.726 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:07.726 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:07.726 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:07.726 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:07.726 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:07.726 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:07.726 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:07.726 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:07.726 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:07.726 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:07.726 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:07.726 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:07.726 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:07.726 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:07.726 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:07.727 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:07.727 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:07.727 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:07.727 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:07.727 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:07.727 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:07.727 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:07.727 list of memzone associated elements. 
size: 602.262573 MiB 00:06:07.727 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:07.727 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:07.727 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:07.727 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:07.727 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:07.727 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_949505_0 00:06:07.727 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:07.727 associated memzone info: size: 48.002930 MiB name: MP_evtpool_949505_0 00:06:07.727 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:07.727 associated memzone info: size: 48.002930 MiB name: MP_msgpool_949505_0 00:06:07.727 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:07.727 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:07.727 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:07.727 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:07.727 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:07.727 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_949505 00:06:07.727 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:07.727 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_949505 00:06:07.727 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:07.727 associated memzone info: size: 1.007996 MiB name: MP_evtpool_949505 00:06:07.727 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:07.727 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:07.727 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:07.727 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:07.727 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:07.727 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:07.727 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:07.727 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:07.727 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:07.727 associated memzone info: size: 1.000366 MiB name: RG_ring_0_949505 00:06:07.727 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:07.727 associated memzone info: size: 1.000366 MiB name: RG_ring_1_949505 00:06:07.727 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:07.727 associated memzone info: size: 1.000366 MiB name: RG_ring_4_949505 00:06:07.727 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:07.727 associated memzone info: size: 1.000366 MiB name: RG_ring_5_949505 00:06:07.727 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:07.727 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_949505 00:06:07.727 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:07.727 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:07.727 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:07.727 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:07.727 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:07.727 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:07.727 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:07.727 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_949505 00:06:07.727 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:07.727 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:07.727 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:07.727 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:07.727 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:07.727 associated memzone info: size: 0.015991 MiB name: RG_ring_3_949505 00:06:07.727 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:07.727 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:07.727 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:07.727 associated memzone info: size: 0.000183 MiB name: MP_msgpool_949505 00:06:07.727 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:07.727 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_949505 00:06:07.727 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:07.727 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:07.727 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:07.727 11:55:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 949505 00:06:07.727 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 949505 ']' 00:06:07.727 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 949505 00:06:07.727 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949505 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949505' 00:06:07.986 killing process with pid 949505 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 949505 00:06:07.986 11:55:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 949505 00:06:08.244 00:06:08.244 real 0m1.407s 00:06:08.244 user 0m1.488s 00:06:08.244 sys 0m0.410s 00:06:08.244 11:55:58 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.244 11:55:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.244 ************************************ 00:06:08.244 END TEST dpdk_mem_utility 00:06:08.244 ************************************ 00:06:08.244 11:55:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.244 11:55:58 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.244 11:55:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.244 11:55:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.245 11:55:58 -- common/autotest_common.sh@10 -- # set +x 00:06:08.245 ************************************ 00:06:08.245 START TEST event 00:06:08.245 ************************************ 00:06:08.245 11:55:58 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.245 * Looking for test storage... 
00:06:08.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:08.245 11:55:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:08.245 11:55:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.245 11:55:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.245 11:55:58 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.245 11:55:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.245 11:55:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.503 ************************************ 00:06:08.503 START TEST event_perf 00:06:08.503 ************************************ 00:06:08.503 11:55:58 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.503 Running I/O for 1 seconds...[2024-07-15 11:55:58.277219] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:08.503 [2024-07-15 11:55:58.277283] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949799 ] 00:06:08.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.503 [2024-07-15 11:55:58.350703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.503 [2024-07-15 11:55:58.393206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.503 [2024-07-15 11:55:58.393344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.503 [2024-07-15 11:55:58.393346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.503 [2024-07-15 11:55:58.393318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.878 Running I/O for 1 seconds... 00:06:09.878 lcore 0: 207934 00:06:09.878 lcore 1: 207934 00:06:09.878 lcore 2: 207934 00:06:09.878 lcore 3: 207934 00:06:09.878 done. 00:06:09.878 00:06:09.878 real 0m1.196s 00:06:09.878 user 0m4.103s 00:06:09.878 sys 0m0.087s 00:06:09.878 11:55:59 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.878 11:55:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.878 ************************************ 00:06:09.878 END TEST event_perf 00:06:09.878 ************************************ 00:06:09.878 11:55:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.879 11:55:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.879 11:55:59 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.879 11:55:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.879 11:55:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.879 ************************************ 00:06:09.879 START TEST event_reactor 00:06:09.879 ************************************ 00:06:09.879 11:55:59 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.879 [2024-07-15 11:55:59.546680] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:09.879 [2024-07-15 11:55:59.546753] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950050 ] 00:06:09.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.879 [2024-07-15 11:55:59.616238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.879 [2024-07-15 11:55:59.657361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.815 test_start 00:06:10.815 oneshot 00:06:10.815 tick 100 00:06:10.815 tick 100 00:06:10.815 tick 250 00:06:10.815 tick 100 00:06:10.815 tick 100 00:06:10.815 tick 100 00:06:10.815 tick 250 00:06:10.815 tick 500 00:06:10.815 tick 100 00:06:10.815 tick 100 00:06:10.815 tick 250 00:06:10.815 tick 100 00:06:10.815 tick 100 00:06:10.815 test_end 00:06:10.815 00:06:10.815 real 0m1.194s 00:06:10.815 user 0m1.109s 00:06:10.815 sys 0m0.081s 00:06:10.815 11:56:00 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.815 11:56:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:10.815 ************************************ 00:06:10.815 END TEST event_reactor 00:06:10.815 ************************************ 00:06:10.815 11:56:00 event -- common/autotest_common.sh@1142 -- # return 0 00:06:10.815 11:56:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.815 11:56:00 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:10.815 11:56:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.815 11:56:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.815 ************************************ 00:06:10.815 START TEST event_reactor_perf 00:06:10.815 ************************************ 00:06:10.815 11:56:00 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:10.815 [2024-07-15 11:56:00.807474] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:10.815 [2024-07-15 11:56:00.807544] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950298 ] 00:06:11.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.073 [2024-07-15 11:56:00.877300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.073 [2024-07-15 11:56:00.917323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.005 test_start 00:06:12.005 test_end 00:06:12.005 Performance: 503361 events per second 00:06:12.005 00:06:12.005 real 0m1.189s 00:06:12.005 user 0m1.099s 00:06:12.005 sys 0m0.086s 00:06:12.005 11:56:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.005 11:56:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.005 ************************************ 00:06:12.005 END TEST event_reactor_perf 00:06:12.005 ************************************ 00:06:12.263 11:56:02 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.263 11:56:02 event -- event/event.sh@49 -- # uname -s 00:06:12.263 11:56:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.263 11:56:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.263 11:56:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.263 11:56:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.263 11:56:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.263 ************************************ 00:06:12.263 START TEST event_scheduler 00:06:12.263 ************************************ 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.263 * Looking for test storage... 00:06:12.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:12.263 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.263 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=950570 00:06:12.263 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.263 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.263 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 950570 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 950570 ']' 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.263 11:56:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.263 [2024-07-15 11:56:02.184626] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:12.263 [2024-07-15 11:56:02.184669] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950570 ] 00:06:12.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.263 [2024-07-15 11:56:02.252965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.521 [2024-07-15 11:56:02.296891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.521 [2024-07-15 11:56:02.296999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.521 [2024-07-15 11:56:02.297106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.521 [2024-07-15 11:56:02.297107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:12.521 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.521 [2024-07-15 11:56:02.333698] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:12.521 [2024-07-15 11:56:02.333715] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.521 [2024-07-15 11:56:02.333723] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.521 [2024-07-15 11:56:02.333729] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.521 [2024-07-15 11:56:02.333734] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.521 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.521 [2024-07-15 11:56:02.399323] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
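The sequence above switches the target to the dynamic scheduler before framework initialization completes (the scheduler app is launched with --wait-for-rpc, so the scheduler can be chosen first). A minimal sketch of the same setup against a running app over the default RPC socket, assuming the stock scripts/rpc.py client and that /var/tmp/spdk.sock is the listening socket, as it is elsewhere in this run:

    # Sketch only: select the dynamic scheduler, finish init, then read the choice back.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler

All three method names appear in the rpc_get_methods listing earlier in this log. The dpdk_governor error above does not stop the test: the dynamic scheduler still comes up with its default load/core/busy thresholds (20/80/95, as logged) even though the governor could not be initialized for this core mask.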
00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.521 11:56:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.521 11:56:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.521 ************************************ 00:06:12.521 START TEST scheduler_create_thread 00:06:12.521 ************************************ 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.521 2 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.521 3 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.521 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 4 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 5 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 6 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 7 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 8 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 9 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.522 10 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.522 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.780 11:56:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.152 11:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.152 11:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:14.152 11:56:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:14.152 11:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.152 11:56:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 11:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.086 00:06:15.086 real 0m2.618s 00:06:15.086 user 0m0.023s 00:06:15.086 sys 0m0.005s 00:06:15.086 11:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.086 11:56:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 ************************************ 00:06:15.086 END TEST scheduler_create_thread 00:06:15.086 ************************************ 00:06:15.086 11:56:05 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:15.086 11:56:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:15.086 11:56:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 950570 00:06:15.086 11:56:05 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 950570 ']' 00:06:15.086 11:56:05 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 950570 00:06:15.086 11:56:05 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950570 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950570' 00:06:15.343 killing process with pid 950570 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 950570 00:06:15.343 11:56:05 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 950570 00:06:15.601 [2024-07-15 11:56:05.529299] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
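The scheduler_create_thread sub-test above exercises the thread lifecycle purely through the scheduler_plugin RPCs: pinned busy and idle threads on each of the four cores, an unpinned thread that is then set 50% active, and a final thread that is created and immediately deleted. A condensed sketch of that call sequence, assuming scheduler_plugin is importable by rpc.py (the test arranges this) and that scheduler_thread_create prints the new thread id, as the thread_id=11 and thread_id=12 assignments above indicate:

    # Sketch only: thread lifecycle via the test's scheduler_plugin RPCs (not core SPDK RPCs).
    rpc="./scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # 100% busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0        # idle thread pinned to core 0
    tid=$($rpc scheduler_thread_create -n half_active -a 0)        # unpinned thread; capture its id
    $rpc scheduler_thread_set_active "$tid" 50                     # drive it to 50% active load
    tid2=$($rpc scheduler_thread_create -n deleted -a 100)         # short-lived thread
    $rpc scheduler_thread_delete "$tid2"                           # remove it again

This leaves the dynamic scheduler with a mix of pinned, idle, and partially active threads to rebalance across the 0xF core mask before the test shuts the application down.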
00:06:15.860 00:06:15.860 real 0m3.661s 00:06:15.860 user 0m5.477s 00:06:15.860 sys 0m0.348s 00:06:15.860 11:56:05 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.860 11:56:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.860 ************************************ 00:06:15.860 END TEST event_scheduler 00:06:15.860 ************************************ 00:06:15.860 11:56:05 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.860 11:56:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.860 11:56:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.860 11:56:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.860 11:56:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.860 11:56:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.860 ************************************ 00:06:15.860 START TEST app_repeat 00:06:15.860 ************************************ 00:06:15.860 11:56:05 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=951126 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.860 11:56:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 951126' 00:06:15.861 Process app_repeat pid: 951126 00:06:15.861 11:56:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.861 11:56:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.861 spdk_app_start Round 0 00:06:15.861 11:56:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 951126 /var/tmp/spdk-nbd.sock 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 951126 ']' 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.861 11:56:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.861 [2024-07-15 11:56:05.824155] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:15.861 [2024-07-15 11:56:05.824206] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951126 ] 00:06:15.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.119 [2024-07-15 11:56:05.892893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.119 [2024-07-15 11:56:05.933334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.119 [2024-07-15 11:56:05.933336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.119 11:56:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.119 11:56:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.119 11:56:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.377 Malloc0 00:06:16.377 11:56:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.635 Malloc1 00:06:16.635 11:56:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.635 /dev/nbd0 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.635 11:56:06 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.635 1+0 records in 00:06:16.635 1+0 records out 00:06:16.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185609 s, 22.1 MB/s 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.635 11:56:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.635 11:56:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.893 /dev/nbd1 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.893 1+0 records in 00:06:16.893 1+0 records out 00:06:16.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026004 s, 15.8 MB/s 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:16.893 11:56:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.893 11:56:06 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.893 11:56:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.151 { 00:06:17.151 "nbd_device": "/dev/nbd0", 00:06:17.151 "bdev_name": "Malloc0" 00:06:17.151 }, 00:06:17.151 { 00:06:17.151 "nbd_device": "/dev/nbd1", 00:06:17.151 "bdev_name": "Malloc1" 00:06:17.151 } 00:06:17.151 ]' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.151 { 00:06:17.151 "nbd_device": "/dev/nbd0", 00:06:17.151 "bdev_name": "Malloc0" 00:06:17.151 }, 00:06:17.151 { 00:06:17.151 "nbd_device": "/dev/nbd1", 00:06:17.151 "bdev_name": "Malloc1" 00:06:17.151 } 00:06:17.151 ]' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.151 /dev/nbd1' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.151 /dev/nbd1' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.151 256+0 records in 00:06:17.151 256+0 records out 00:06:17.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01038 s, 101 MB/s 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.151 256+0 records in 00:06:17.151 256+0 records out 00:06:17.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137177 s, 76.4 MB/s 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.151 256+0 records in 00:06:17.151 256+0 records out 00:06:17.151 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0145391 s, 72.1 MB/s 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.151 11:56:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.409 11:56:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.667 11:56:07 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.667 11:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.925 11:56:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.925 11:56:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.185 11:56:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.185 [2024-07-15 11:56:08.143264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.185 [2024-07-15 11:56:08.179277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.185 [2024-07-15 11:56:08.179277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.444 [2024-07-15 11:56:08.220357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.444 [2024-07-15 11:56:08.220399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.006 11:56:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.006 11:56:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.006 spdk_app_start Round 1 00:06:21.006 11:56:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 951126 /var/tmp/spdk-nbd.sock 00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 951126 ']' 00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
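
[editor's note] The Round 0 trace above exercises the write/verify helper that every round repeats: 1 MiB of random data is written through each exported NBD device with direct I/O and then compared back byte-for-byte against the source file. A minimal sketch of that flow, using an assumed scratch path in place of the workspace file, looks like this (the loop structure is an approximation of the traced nbd_dd_data_verify helper, not a verbatim copy of bdev/nbd_common.sh):

    tmp_file=/tmp/nbdrandtest                 # assumed scratch path for this sketch
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: generate 1 MiB of random data once, copy it to every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the source
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"
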
00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.006 11:56:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.264 11:56:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.264 11:56:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:21.264 11:56:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.520 Malloc0 00:06:21.520 11:56:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.778 Malloc1 00:06:21.778 11:56:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.778 /dev/nbd0 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:21.778 1+0 records in 00:06:21.778 1+0 records out 00:06:21.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228349 s, 17.9 MB/s 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.778 11:56:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.778 11:56:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.036 /dev/nbd1 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.036 1+0 records in 00:06:22.036 1+0 records out 00:06:22.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020045 s, 20.4 MB/s 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:22.036 11:56:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.036 11:56:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:22.295 { 00:06:22.295 "nbd_device": "/dev/nbd0", 00:06:22.295 "bdev_name": "Malloc0" 00:06:22.295 }, 00:06:22.295 { 00:06:22.295 "nbd_device": "/dev/nbd1", 00:06:22.295 "bdev_name": "Malloc1" 00:06:22.295 } 00:06:22.295 ]' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.295 { 00:06:22.295 "nbd_device": "/dev/nbd0", 00:06:22.295 "bdev_name": "Malloc0" 00:06:22.295 }, 00:06:22.295 { 00:06:22.295 "nbd_device": "/dev/nbd1", 00:06:22.295 "bdev_name": "Malloc1" 00:06:22.295 } 00:06:22.295 ]' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.295 /dev/nbd1' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.295 /dev/nbd1' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.295 256+0 records in 00:06:22.295 256+0 records out 00:06:22.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103648 s, 101 MB/s 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.295 256+0 records in 00:06:22.295 256+0 records out 00:06:22.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137007 s, 76.5 MB/s 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.295 256+0 records in 00:06:22.295 256+0 records out 00:06:22.295 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151682 s, 69.1 MB/s 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.295 11:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.553 11:56:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.810 11:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.067 11:56:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.067 11:56:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.326 11:56:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.326 [2024-07-15 11:56:13.277410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.326 [2024-07-15 11:56:13.312838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.326 [2024-07-15 11:56:13.312840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.583 [2024-07-15 11:56:13.354402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.583 [2024-07-15 11:56:13.354441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:26.858 spdk_app_start Round 2 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 951126 /var/tmp/spdk-nbd.sock 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 951126 ']' 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
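
[editor's note] Each round, including Round 2 below, starts from the same setup: two 64 MiB malloc bdevs with 4096-byte blocks are created over the app's RPC socket and exported as NBD devices, and each device is polled until it is actually usable. A hedged sketch of that setup follows; $SPDK_DIR stands in for the full workspace path used in the trace, and the back-off delay in the readiness loop is an assumption (the traced waitfornbd helper additionally confirms the device with a one-block direct read and a size check):

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # prints "Malloc0" the first time, "Malloc1" the second
    $rpc bdev_malloc_create 64 4096
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # readiness check, as in waitfornbd: the device must appear in /proc/partitions
    # and a single direct-I/O read must succeed before the test proceeds
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                          # assumed back-off; the helper's exact delay is not shown here
    done
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
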
00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.858 11:56:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.858 Malloc0 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.858 Malloc1 00:06:26.858 11:56:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.858 11:56:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.858 11:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.858 11:56:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.859 11:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.859 /dev/nbd0 00:06:27.116 11:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.116 11:56:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.116 11:56:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:27.116 11:56:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.116 11:56:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.116 11:56:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.116 11:56:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:27.117 1+0 records in 00:06:27.117 1+0 records out 00:06:27.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187501 s, 21.8 MB/s 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.117 11:56:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.117 11:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.117 11:56:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.117 11:56:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.117 /dev/nbd1 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.117 1+0 records in 00:06:27.117 1+0 records out 00:06:27.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229494 s, 17.8 MB/s 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:27.117 11:56:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.117 11:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:27.375 { 00:06:27.375 "nbd_device": "/dev/nbd0", 00:06:27.375 "bdev_name": "Malloc0" 00:06:27.375 }, 00:06:27.375 { 00:06:27.375 "nbd_device": "/dev/nbd1", 00:06:27.375 "bdev_name": "Malloc1" 00:06:27.375 } 00:06:27.375 ]' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.375 { 00:06:27.375 "nbd_device": "/dev/nbd0", 00:06:27.375 "bdev_name": "Malloc0" 00:06:27.375 }, 00:06:27.375 { 00:06:27.375 "nbd_device": "/dev/nbd1", 00:06:27.375 "bdev_name": "Malloc1" 00:06:27.375 } 00:06:27.375 ]' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.375 /dev/nbd1' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.375 /dev/nbd1' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.375 256+0 records in 00:06:27.375 256+0 records out 00:06:27.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395491 s, 265 MB/s 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.375 256+0 records in 00:06:27.375 256+0 records out 00:06:27.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137396 s, 76.3 MB/s 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.375 11:56:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.375 256+0 records in 00:06:27.375 256+0 records out 00:06:27.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148723 s, 70.5 MB/s 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.634 11:56:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.892 11:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.149 11:56:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.149 11:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.149 11:56:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.149 11:56:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.149 11:56:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:28.408 11:56:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.408 [2024-07-15 11:56:18.388416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.667 [2024-07-15 11:56:18.425478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.667 [2024-07-15 11:56:18.425478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.667 [2024-07-15 11:56:18.466472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.667 [2024-07-15 11:56:18.466516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.938 11:56:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 951126 /var/tmp/spdk-nbd.sock 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 951126 ']' 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
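
[editor's note] The teardown traced above is also identical in every round: stop both NBD exports, wait for the kernel device nodes to drop out of /proc/partitions, and confirm the app reports no remaining NBD disks before the round sends spdk_kill_instance SIGTERM. A minimal sketch, with $SPDK_DIR assumed as before and the polling delay an assumption:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for dev in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for i in $(seq 1 20); do           # mirrors the 20-try loop in waitfornbd_exit
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1                      # assumed back-off
        done
    done
    # the trace then lists the remaining disks and expects an empty '[]' result
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "${count:-0}" -eq 0 ]                # matches the trace's '[' 0 -ne 0 ']' guard
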
00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.938 11:56:21 event.app_repeat -- event/event.sh@39 -- # killprocess 951126 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 951126 ']' 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 951126 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 951126 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 951126' 00:06:31.938 killing process with pid 951126 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 951126 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 951126 00:06:31.938 spdk_app_start is called in Round 0. 00:06:31.938 Shutdown signal received, stop current app iteration 00:06:31.938 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 reinitialization... 00:06:31.938 spdk_app_start is called in Round 1. 00:06:31.938 Shutdown signal received, stop current app iteration 00:06:31.938 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 reinitialization... 00:06:31.938 spdk_app_start is called in Round 2. 00:06:31.938 Shutdown signal received, stop current app iteration 00:06:31.938 Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 reinitialization... 00:06:31.938 spdk_app_start is called in Round 3. 
00:06:31.938 Shutdown signal received, stop current app iteration 00:06:31.938 11:56:21 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:31.938 11:56:21 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:31.938 00:06:31.938 real 0m15.825s 00:06:31.938 user 0m34.566s 00:06:31.938 sys 0m2.380s 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.938 11:56:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.938 ************************************ 00:06:31.938 END TEST app_repeat 00:06:31.938 ************************************ 00:06:31.938 11:56:21 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.938 11:56:21 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:31.938 11:56:21 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.938 11:56:21 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.938 11:56:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.938 11:56:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.938 ************************************ 00:06:31.938 START TEST cpu_locks 00:06:31.938 ************************************ 00:06:31.938 11:56:21 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:31.938 * Looking for test storage... 00:06:31.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:31.938 11:56:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:31.938 11:56:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:31.938 11:56:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:31.938 11:56:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:31.938 11:56:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.938 11:56:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.938 11:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.938 ************************************ 00:06:31.938 START TEST default_locks 00:06:31.938 ************************************ 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=954083 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 954083 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 954083 ']' 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
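
[editor's note] The default_locks test that starts here checks that an spdk_tgt launched with -m 0x1 holds an advisory lock on its per-core lock file (a /var/tmp/spdk_cpu_lock_* path; the exact name is not shown in this trace) and that lslocks on the target's pid can see it. The stray "lslocks: write error" below is most likely lslocks complaining about the pipe that grep -q closes after its first match, not a test failure. A hedged sketch of the check:

    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &      # $SPDK_DIR assumed, as above
    pid=$!
    # (the trace waits for /var/tmp/spdk.sock to accept RPCs here -- waitforlisten)
    lslocks -p "$pid" | grep -q spdk_cpu_lock  # exits 0 while the core lock file is held
    kill "$pid"
    wait "$pid"                                # after this, the lock is gone and waitforlisten on the pid must fail
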
00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.938 11:56:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.938 [2024-07-15 11:56:21.859420] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:31.938 [2024-07-15 11:56:21.859466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954083 ] 00:06:31.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.938 [2024-07-15 11:56:21.926216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.196 [2024-07-15 11:56:21.966342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.762 11:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.762 11:56:22 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:32.762 11:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 954083 00:06:32.762 11:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 954083 00:06:32.762 11:56:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.330 lslocks: write error 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 954083 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 954083 ']' 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 954083 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954083 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954083' 00:06:33.330 killing process with pid 954083 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 954083 00:06:33.330 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 954083 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 954083 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 954083 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 954083 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 954083 ']' 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (954083) - No such process 00:06:33.589 ERROR: process (pid: 954083) is no longer running 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:33.589 00:06:33.589 real 0m1.663s 00:06:33.589 user 0m1.736s 00:06:33.589 sys 0m0.580s 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.589 11:56:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 ************************************ 00:06:33.589 END TEST default_locks 00:06:33.589 ************************************ 00:06:33.589 11:56:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.589 11:56:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:33.589 11:56:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.589 11:56:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.589 11:56:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 ************************************ 00:06:33.589 START TEST default_locks_via_rpc 00:06:33.589 ************************************ 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=954342 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 954342 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 954342 ']' 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.589 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.589 [2024-07-15 11:56:23.591134] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:33.589 [2024-07-15 11:56:23.591176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954342 ] 00:06:33.847 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.847 [2024-07-15 11:56:23.656948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.847 [2024-07-15 11:56:23.694130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 954342 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 954342 00:06:34.105 11:56:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.363 11:56:24 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 954342 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 954342 ']' 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 954342 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954342 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954342' 00:06:34.363 killing process with pid 954342 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 954342 00:06:34.363 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 954342 00:06:34.942 00:06:34.942 real 0m1.094s 00:06:34.942 user 0m1.050s 00:06:34.942 sys 0m0.493s 00:06:34.942 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.942 11:56:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.942 ************************************ 00:06:34.942 END TEST default_locks_via_rpc 00:06:34.942 ************************************ 00:06:34.942 11:56:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.942 11:56:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:34.942 11:56:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.942 11:56:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.942 11:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.942 ************************************ 00:06:34.942 START TEST non_locking_app_on_locked_coremask 00:06:34.942 ************************************ 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=954598 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 954598 /var/tmp/spdk.sock 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 954598 ']' 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.942 11:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.942 [2024-07-15 11:56:24.754617] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:34.942 [2024-07-15 11:56:24.754663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954598 ] 00:06:34.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.942 [2024-07-15 11:56:24.822764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.942 [2024-07-15 11:56:24.860145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=954827 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 954827 /var/tmp/spdk2.sock 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 954827 ']' 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.877 11:56:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.877 [2024-07-15 11:56:25.604784] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:35.877 [2024-07-15 11:56:25.604832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid954827 ] 00:06:35.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.877 [2024-07-15 11:56:25.681128] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
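The "CPU core locks deactivated." notice above comes from the second target in this test, which is launched on the same core mask (-m 0x1) as the already-running target but with --disable-cpumask-locks and its own RPC socket (/var/tmp/spdk2.sock), so it never tries to claim the per-core lock file. A minimal sketch of that pattern, using only the paths and flags visible in the trace (not the actual cpu_locks.sh helper):

    # first target claims core 0 and creates /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    # second target shares the mask but skips the lock, listening on a separate RPC socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &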
00:06:35.877 [2024-07-15 11:56:25.681158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.877 [2024-07-15 11:56:25.760949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.443 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.443 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.443 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 954598 00:06:36.443 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 954598 00:06:36.443 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.701 lslocks: write error 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 954598 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 954598 ']' 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 954598 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954598 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.701 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954598' 00:06:36.702 killing process with pid 954598 00:06:36.702 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 954598 00:06:36.702 11:56:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 954598 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 954827 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 954827 ']' 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 954827 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 954827 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 954827' 00:06:37.637 killing 
process with pid 954827 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 954827 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 954827 00:06:37.637 00:06:37.637 real 0m2.929s 00:06:37.637 user 0m3.147s 00:06:37.637 sys 0m0.815s 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.637 11:56:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.637 ************************************ 00:06:37.637 END TEST non_locking_app_on_locked_coremask 00:06:37.637 ************************************ 00:06:37.896 11:56:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:37.896 11:56:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:37.896 11:56:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.896 11:56:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.896 11:56:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.896 ************************************ 00:06:37.896 START TEST locking_app_on_unlocked_coremask 00:06:37.896 ************************************ 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=955100 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 955100 /var/tmp/spdk.sock 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 955100 ']' 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.896 11:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.896 [2024-07-15 11:56:27.750592] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:37.896 [2024-07-15 11:56:27.750634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955100 ] 00:06:37.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.896 [2024-07-15 11:56:27.817297] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.896 [2024-07-15 11:56:27.817321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.896 [2024-07-15 11:56:27.855403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.155 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.155 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=955246 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 955246 /var/tmp/spdk2.sock 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 955246 ']' 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.156 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.156 [2024-07-15 11:56:28.095349] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:38.156 [2024-07-15 11:56:28.095398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955246 ] 00:06:38.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.415 [2024-07-15 11:56:28.172946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.415 [2024-07-15 11:56:28.252935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.981 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.981 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:38.981 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 955246 00:06:38.981 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 955246 00:06:38.981 11:56:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.548 lslocks: write error 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 955100 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 955100 ']' 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 955100 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955100 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955100' 00:06:39.548 killing process with pid 955100 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 955100 00:06:39.548 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 955100 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 955246 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 955246 ']' 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 955246 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955246 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955246' 00:06:40.115 killing process with pid 955246 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 955246 00:06:40.115 11:56:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 955246 00:06:40.431 00:06:40.431 real 0m2.606s 00:06:40.431 user 0m2.712s 00:06:40.431 sys 0m0.849s 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.431 ************************************ 00:06:40.431 END TEST locking_app_on_unlocked_coremask 00:06:40.431 ************************************ 00:06:40.431 11:56:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:40.431 11:56:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.431 11:56:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.431 11:56:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.431 11:56:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.431 ************************************ 00:06:40.431 START TEST locking_app_on_locked_coremask 00:06:40.431 ************************************ 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=955601 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 955601 /var/tmp/spdk.sock 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 955601 ']' 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.431 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.691 [2024-07-15 11:56:30.423085] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:40.691 [2024-07-15 11:56:30.423125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955601 ] 00:06:40.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.691 [2024-07-15 11:56:30.491305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.691 [2024-07-15 11:56:30.531809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=955703 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 955703 /var/tmp/spdk2.sock 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 955703 /var/tmp/spdk2.sock 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 955703 /var/tmp/spdk2.sock 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 955703 ']' 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.949 11:56:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.949 [2024-07-15 11:56:30.772557] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
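The NOT / valid_exec_arg wrapping above exists because this second launch (pid 955703, same -m 0x1 mask, no --disable-cpumask-locks) is expected to fail: core 0 is still locked by 955601, so the test only proceeds if waitforlisten exits non-zero. A simplified, hypothetical stand-in for that wrapper (the real helper in autotest_common.sh does considerably more):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 955703 /var/tmp/spdk2.sock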
00:06:40.949 [2024-07-15 11:56:30.772605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955703 ] 00:06:40.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.949 [2024-07-15 11:56:30.850348] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 955601 has claimed it. 00:06:40.949 [2024-07-15 11:56:30.850385] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (955703) - No such process 00:06:41.517 ERROR: process (pid: 955703) is no longer running 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 955601 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 955601 00:06:41.517 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.085 lslocks: write error 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 955601 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 955601 ']' 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 955601 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955601 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955601' 00:06:42.085 killing process with pid 955601 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 955601 00:06:42.085 11:56:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 955601 00:06:42.344 00:06:42.344 real 0m1.820s 00:06:42.344 user 0m1.917s 00:06:42.344 sys 0m0.605s 00:06:42.344 11:56:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.344 11:56:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.344 ************************************ 00:06:42.344 END TEST locking_app_on_locked_coremask 00:06:42.344 ************************************ 00:06:42.344 11:56:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:42.344 11:56:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.344 11:56:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.344 11:56:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.344 11:56:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.344 ************************************ 00:06:42.344 START TEST locking_overlapped_coremask 00:06:42.344 ************************************ 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=956051 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 956051 /var/tmp/spdk.sock 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 956051 ']' 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.344 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.344 [2024-07-15 11:56:32.309864] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:42.344 [2024-07-15 11:56:32.309907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956051 ] 00:06:42.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.603 [2024-07-15 11:56:32.375018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.603 [2024-07-15 11:56:32.417835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.603 [2024-07-15 11:56:32.417940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.603 [2024-07-15 11:56:32.417940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.603 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=956088 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 956088 /var/tmp/spdk2.sock 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 956088 /var/tmp/spdk2.sock 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 956088 /var/tmp/spdk2.sock 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 956088 ']' 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.863 11:56:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.863 [2024-07-15 11:56:32.658220] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
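The failure reported just below can be read directly off the two core masks: the first target was started with -m 0x7 (cores 0-2) and this second one asks for -m 0x1c (cores 2-4), so they collide on core 2, the core the error names. Assuming nothing beyond plain bash arithmetic, the overlap is the bitwise AND of the masks:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 -> core 2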
00:06:42.863 [2024-07-15 11:56:32.658274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956088 ] 00:06:42.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.863 [2024-07-15 11:56:32.733927] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 956051 has claimed it. 00:06:42.863 [2024-07-15 11:56:32.733964] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (956088) - No such process 00:06:43.431 ERROR: process (pid: 956088) is no longer running 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 956051 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 956051 ']' 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 956051 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 956051 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 956051' 00:06:43.431 killing process with pid 956051 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 956051 00:06:43.431 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 956051 00:06:43.690 00:06:43.690 real 0m1.373s 00:06:43.690 user 0m3.688s 00:06:43.690 sys 0m0.396s 00:06:43.690 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.690 11:56:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.690 ************************************ 00:06:43.690 END TEST locking_overlapped_coremask 00:06:43.690 ************************************ 00:06:43.690 11:56:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.690 11:56:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.690 11:56:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.690 11:56:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.690 11:56:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.949 ************************************ 00:06:43.949 START TEST locking_overlapped_coremask_via_rpc 00:06:43.949 ************************************ 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=956348 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 956348 /var/tmp/spdk.sock 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 956348 ']' 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.949 11:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.949 [2024-07-15 11:56:33.751471] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:43.949 [2024-07-15 11:56:33.751513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956348 ] 00:06:43.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.949 [2024-07-15 11:56:33.804634] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.949 [2024-07-15 11:56:33.804658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.949 [2024-07-15 11:56:33.851244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.949 [2024-07-15 11:56:33.851277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.949 [2024-07-15 11:56:33.851277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=956364 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 956364 /var/tmp/spdk2.sock 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 956364 ']' 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.208 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.208 [2024-07-15 11:56:34.088120] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:44.208 [2024-07-15 11:56:34.088162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956364 ] 00:06:44.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.208 [2024-07-15 11:56:34.163637] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.208 [2024-07-15 11:56:34.163668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.468 [2024-07-15 11:56:34.244980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.468 [2024-07-15 11:56:34.248268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.468 [2024-07-15 11:56:34.248269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.037 [2024-07-15 11:56:34.912297] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 956348 has claimed it. 
00:06:45.037 request: 00:06:45.037 { 00:06:45.037 "method": "framework_enable_cpumask_locks", 00:06:45.037 "req_id": 1 00:06:45.037 } 00:06:45.037 Got JSON-RPC error response 00:06:45.037 response: 00:06:45.037 { 00:06:45.037 "code": -32603, 00:06:45.037 "message": "Failed to claim CPU core: 2" 00:06:45.037 } 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 956348 /var/tmp/spdk.sock 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 956348 ']' 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.037 11:56:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 956364 /var/tmp/spdk2.sock 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 956364 ']' 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.296 00:06:45.296 real 0m1.589s 00:06:45.296 user 0m0.781s 00:06:45.296 sys 0m0.131s 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.296 11:56:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 ************************************ 00:06:45.296 END TEST locking_overlapped_coremask_via_rpc 00:06:45.296 ************************************ 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:45.556 11:56:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:45.556 11:56:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 956348 ]] 00:06:45.556 11:56:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 956348 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 956348 ']' 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 956348 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 956348 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 956348' 00:06:45.556 killing process with pid 956348 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 956348 00:06:45.556 11:56:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 956348 00:06:45.815 11:56:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 956364 ]] 00:06:45.815 11:56:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 956364 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 956364 ']' 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 956364 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
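check_remaining_locks above passes because the surviving 0x7 target still holds exactly the expected per-core lock files, /var/tmp/spdk_cpu_lock_000 through _002. A stand-alone version of the same comparison, using the glob and brace expansion shown in the trace rather than the test's own helper:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files" >&2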
00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 956364 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 956364' 00:06:45.815 killing process with pid 956364 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 956364 00:06:45.815 11:56:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 956364 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 956348 ]] 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 956348 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 956348 ']' 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 956348 00:06:46.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (956348) - No such process 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 956348 is not found' 00:06:46.075 Process with pid 956348 is not found 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 956364 ]] 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 956364 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 956364 ']' 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 956364 00:06:46.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (956364) - No such process 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 956364 is not found' 00:06:46.075 Process with pid 956364 is not found 00:06:46.075 11:56:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.075 00:06:46.075 real 0m14.348s 00:06:46.075 user 0m24.127s 00:06:46.075 sys 0m4.780s 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.075 11:56:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.075 ************************************ 00:06:46.075 END TEST cpu_locks 00:06:46.075 ************************************ 00:06:46.075 11:56:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.075 00:06:46.075 real 0m37.933s 00:06:46.075 user 1m10.692s 00:06:46.075 sys 0m8.104s 00:06:46.075 11:56:36 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.075 11:56:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.075 ************************************ 00:06:46.075 END TEST event 00:06:46.075 ************************************ 00:06:46.335 11:56:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:46.335 11:56:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:46.335 11:56:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.335 11:56:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.335 11:56:36 -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.335 ************************************ 00:06:46.335 START TEST thread 00:06:46.335 ************************************ 00:06:46.335 11:56:36 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:46.335 * Looking for test storage... 00:06:46.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:46.335 11:56:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.335 11:56:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:46.335 11:56:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.335 11:56:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.335 ************************************ 00:06:46.335 START TEST thread_poller_perf 00:06:46.335 ************************************ 00:06:46.335 11:56:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.335 [2024-07-15 11:56:36.280906] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:46.335 [2024-07-15 11:56:36.280978] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956786 ] 00:06:46.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.594 [2024-07-15 11:56:36.352145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.594 [2024-07-15 11:56:36.392546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.594 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:47.530 ====================================== 00:06:47.530 busy:2307361004 (cyc) 00:06:47.530 total_run_count: 409000 00:06:47.530 tsc_hz: 2300000000 (cyc) 00:06:47.530 ====================================== 00:06:47.530 poller_cost: 5641 (cyc), 2452 (nsec) 00:06:47.530 00:06:47.530 real 0m1.201s 00:06:47.530 user 0m1.111s 00:06:47.530 sys 0m0.085s 00:06:47.530 11:56:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.530 11:56:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.530 ************************************ 00:06:47.530 END TEST thread_poller_perf 00:06:47.530 ************************************ 00:06:47.530 11:56:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:47.530 11:56:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.530 11:56:37 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:47.530 11:56:37 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.530 11:56:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.530 ************************************ 00:06:47.530 START TEST thread_poller_perf 00:06:47.530 ************************************ 00:06:47.530 11:56:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.790 [2024-07-15 11:56:37.545135] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:47.790 [2024-07-15 11:56:37.545208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956961 ] 00:06:47.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.790 [2024-07-15 11:56:37.614782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.790 [2024-07-15 11:56:37.654406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.790 Running 1000 pollers for 1 seconds with 0 microseconds period. 
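[editor's note] The poller_cost figure in the summary above is just the reported busy cycle count divided by total_run_count, converted to nanoseconds through the printed TSC frequency; recomputing it from those counters reproduces the log's numbers exactly:

    # recompute the first run's poller_cost from the counters printed above
    busy_cyc=2307361004
    total_run_count=409000
    tsc_hz=2300000000
    cost_cyc=$(( busy_cyc / total_run_count ))          # 5641 (cyc)
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))     # 2452 (nsec)
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The same arithmetic applied to the 0-microsecond run below gives 2301704486 / 5472000 ≈ 420 cycles, roughly 182 ns per poll, which is why dropping the period raises total_run_count by more than an order of magnitude.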
00:06:48.727 ====================================== 00:06:48.727 busy:2301704486 (cyc) 00:06:48.727 total_run_count: 5472000 00:06:48.727 tsc_hz: 2300000000 (cyc) 00:06:48.727 ====================================== 00:06:48.727 poller_cost: 420 (cyc), 182 (nsec) 00:06:48.727 00:06:48.727 real 0m1.191s 00:06:48.727 user 0m1.104s 00:06:48.727 sys 0m0.083s 00:06:48.727 11:56:38 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.727 11:56:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.727 ************************************ 00:06:48.727 END TEST thread_poller_perf 00:06:48.727 ************************************ 00:06:48.986 11:56:38 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:48.986 11:56:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.986 00:06:48.986 real 0m2.612s 00:06:48.986 user 0m2.294s 00:06:48.986 sys 0m0.325s 00:06:48.986 11:56:38 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.986 11:56:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.986 ************************************ 00:06:48.986 END TEST thread 00:06:48.986 ************************************ 00:06:48.986 11:56:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.986 11:56:38 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:48.986 11:56:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.986 11:56:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.986 11:56:38 -- common/autotest_common.sh@10 -- # set +x 00:06:48.986 ************************************ 00:06:48.986 START TEST accel 00:06:48.986 ************************************ 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:48.986 * Looking for test storage... 00:06:48.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.986 11:56:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:48.986 11:56:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:48.986 11:56:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.986 11:56:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=957245 00:06:48.986 11:56:38 accel -- accel/accel.sh@63 -- # waitforlisten 957245 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@829 -- # '[' -z 957245 ']' 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.986 11:56:38 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.986 11:56:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.986 11:56:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.986 11:56:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.986 11:56:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.986 11:56:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.986 11:56:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.986 11:56:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.986 11:56:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:48.986 11:56:38 accel -- accel/accel.sh@41 -- # jq -r . 00:06:48.986 [2024-07-15 11:56:38.963429] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:48.987 [2024-07-15 11:56:38.963479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957245 ] 00:06:48.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.245 [2024-07-15 11:56:39.032640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.245 [2024-07-15 11:56:39.073637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.838 11:56:39 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.838 11:56:39 accel -- common/autotest_common.sh@862 -- # return 0 00:06:49.838 11:56:39 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:49.838 11:56:39 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:49.838 11:56:39 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:49.838 11:56:39 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:49.838 11:56:39 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:49.838 11:56:39 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:49.838 11:56:39 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:49.838 11:56:39 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.838 11:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.838 11:56:39 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.838 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.838 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.838 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.838 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.838 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.838 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.838 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.838 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 
11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.839 11:56:39 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.839 11:56:39 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.839 11:56:39 accel -- accel/accel.sh@75 -- # killprocess 957245 00:06:49.839 11:56:39 accel -- common/autotest_common.sh@948 -- # '[' -z 957245 ']' 00:06:49.839 11:56:39 accel -- common/autotest_common.sh@952 -- # kill -0 957245 00:06:49.839 11:56:39 accel -- common/autotest_common.sh@953 -- # uname 00:06:49.839 11:56:39 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.839 11:56:39 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 957245 00:06:50.098 11:56:39 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.098 11:56:39 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.098 11:56:39 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 957245' 00:06:50.098 killing process with pid 957245 00:06:50.098 11:56:39 accel -- common/autotest_common.sh@967 -- # kill 957245 00:06:50.098 11:56:39 accel -- common/autotest_common.sh@972 -- # wait 957245 00:06:50.357 11:56:40 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:50.357 11:56:40 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:50.357 11:56:40 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:50.357 11:56:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.357 11:56:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.357 11:56:40 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:50.357 11:56:40 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
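[editor's note] The get_expected_opcs trace above pipes the accel_get_opc_assignments RPC output through jq to flatten the opcode-to-module map into opc=module pairs, then records each pair (all "software" here, since no hardware accel module is configured). A standalone sketch of that transformation, with a made-up two-entry payload standing in for the real RPC response:

    # stand-in JSON; the real payload is whatever accel_get_opc_assignments returns
    payload='{"copy": "software", "crc32c": "software"}'
    declare -A expected_opcs
    exp_opcs=($(echo "$payload" | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # e.g. opc=copy, module=software
        expected_opcs["$opc"]=$module
    done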
00:06:50.357 11:56:40 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.357 11:56:40 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:50.357 11:56:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.357 11:56:40 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:50.357 11:56:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.358 11:56:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.358 11:56:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.358 ************************************ 00:06:50.358 START TEST accel_missing_filename 00:06:50.358 ************************************ 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.358 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:50.358 11:56:40 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:50.358 [2024-07-15 11:56:40.321653] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:50.358 [2024-07-15 11:56:40.321724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957524 ] 00:06:50.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.616 [2024-07-15 11:56:40.392085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.616 [2024-07-15 11:56:40.433385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.616 [2024-07-15 11:56:40.474783] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.616 [2024-07-15 11:56:40.535231] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:50.616 A filename is required. 
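[editor's note] accel_missing_filename above is a negative test: run_test wraps accel_perf in NOT, so the case passes precisely because accel_perf refuses to start a compress workload without an input file (-l) and exits non-zero with "A filename is required.". A much-simplified stand-in for that inverted expectation (the real NOT helper in autotest_common.sh does more exit-status bookkeeping than this):

    # succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT ./build/examples/accel_perf -t 1 -w compress && echo "failed as expected"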
00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.616 00:06:50.616 real 0m0.308s 00:06:50.616 user 0m0.219s 00:06:50.616 sys 0m0.125s 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.616 11:56:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:50.616 ************************************ 00:06:50.616 END TEST accel_missing_filename 00:06:50.616 ************************************ 00:06:50.875 11:56:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.875 11:56:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.875 11:56:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:50.875 11:56:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.875 11:56:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.875 ************************************ 00:06:50.875 START TEST accel_compress_verify 00:06:50.875 ************************************ 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.875 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.875 11:56:40 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.875 11:56:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.875 [2024-07-15 11:56:40.698242] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:50.875 [2024-07-15 11:56:40.698308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957744 ] 00:06:50.875 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.875 [2024-07-15 11:56:40.751532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.875 [2024-07-15 11:56:40.792639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.875 [2024-07-15 11:56:40.834041] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.135 [2024-07-15 11:56:40.894302] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:51.135 00:06:51.135 Compression does not support the verify option, aborting. 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.135 00:06:51.135 real 0m0.290s 00:06:51.135 user 0m0.211s 00:06:51.135 sys 0m0.118s 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.135 11:56:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.135 ************************************ 00:06:51.135 END TEST accel_compress_verify 00:06:51.135 ************************************ 00:06:51.135 11:56:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.135 11:56:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:51.135 11:56:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.135 11:56:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.135 11:56:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.135 ************************************ 00:06:51.135 START TEST accel_wrong_workload 00:06:51.135 ************************************ 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.135 11:56:41 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:51.135 11:56:41 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:51.135 Unsupported workload type: foobar 00:06:51.135 [2024-07-15 11:56:41.048710] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:51.135 accel_perf options: 00:06:51.135 [-h help message] 00:06:51.135 [-q queue depth per core] 00:06:51.135 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.135 [-T number of threads per core 00:06:51.135 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.135 [-t time in seconds] 00:06:51.135 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.135 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:51.135 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.135 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.135 [-S for crc32c workload, use this seed value (default 0) 00:06:51.135 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.135 [-f for fill workload, use this BYTE value (default 255) 00:06:51.135 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.135 [-y verify result if this switch is on] 00:06:51.135 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.135 Can be used to spread operations across a wider range of memory. 
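[editor's note] accel_wrong_workload above deliberately passes an unsupported -w value so accel_perf prints its option summary and exits; that listing doubles as a reference for the flags the rest of this suite uses. For comparison, a valid invocation built only from options shown in the listing, matching the flags the crc32c test uses verbatim later in the log:

    # run for 1 second (-t 1), software crc32c workload (-w), seed value 32 (-S), verify results (-y)
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y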
00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.135 00:06:51.135 real 0m0.029s 00:06:51.135 user 0m0.021s 00:06:51.135 sys 0m0.008s 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.135 11:56:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:51.135 ************************************ 00:06:51.135 END TEST accel_wrong_workload 00:06:51.135 ************************************ 00:06:51.135 Error: writing output failed: Broken pipe 00:06:51.135 11:56:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.135 11:56:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.135 11:56:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:51.135 11:56:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.135 11:56:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.135 ************************************ 00:06:51.135 START TEST accel_negative_buffers 00:06:51.135 ************************************ 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.135 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:51.135 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:51.135 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:51.135 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:51.136 11:56:41 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:51.395 -x option must be non-negative. 
00:06:51.395 [2024-07-15 11:56:41.151597] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:51.395 accel_perf options: 00:06:51.395 [-h help message] 00:06:51.395 [-q queue depth per core] 00:06:51.395 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.395 [-T number of threads per core 00:06:51.395 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.395 [-t time in seconds] 00:06:51.395 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.395 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:51.395 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.395 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.395 [-S for crc32c workload, use this seed value (default 0) 00:06:51.395 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.395 [-f for fill workload, use this BYTE value (default 255) 00:06:51.395 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.395 [-y verify result if this switch is on] 00:06:51.395 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.395 Can be used to spread operations across a wider range of memory. 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:51.395 00:06:51.395 real 0m0.035s 00:06:51.395 user 0m0.021s 00:06:51.395 sys 0m0.014s 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.395 11:56:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:51.395 ************************************ 00:06:51.395 END TEST accel_negative_buffers 00:06:51.395 ************************************ 00:06:51.395 Error: writing output failed: Broken pipe 00:06:51.395 11:56:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.395 11:56:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:51.395 11:56:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.395 11:56:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.395 11:56:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.395 ************************************ 00:06:51.395 START TEST accel_crc32c 00:06:51.395 ************************************ 00:06:51.395 11:56:41 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:51.395 11:56:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:51.395 [2024-07-15 11:56:41.251357] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:51.395 [2024-07-15 11:56:41.251411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957809 ] 00:06:51.395 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.395 [2024-07-15 11:56:41.319898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.395 [2024-07-15 11:56:41.365289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:51.655 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.656 11:56:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:52.594 11:56:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.594 00:06:52.594 real 0m1.315s 00:06:52.594 user 0m1.200s 00:06:52.594 sys 0m0.129s 00:06:52.594 11:56:42 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.594 11:56:42 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.594 ************************************ 00:06:52.594 END TEST accel_crc32c 00:06:52.594 ************************************ 00:06:52.594 11:56:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.594 11:56:42 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:52.594 11:56:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:52.594 11:56:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.594 11:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.854 ************************************ 00:06:52.854 START TEST accel_crc32c_C2 00:06:52.854 ************************************ 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:52.854 11:56:42 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:52.854 [2024-07-15 11:56:42.632602] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:52.854 [2024-07-15 11:56:42.632671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958059 ] 00:06:52.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.854 [2024-07-15 11:56:42.703333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.854 [2024-07-15 11:56:42.745061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:52.854 11:56:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:54.232 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.233 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:54.233 11:56:43 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.233 00:06:54.233 real 0m1.314s 00:06:54.233 user 0m1.205s 00:06:54.233 sys 0m0.121s 00:06:54.233 11:56:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.233 11:56:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:54.233 ************************************ 00:06:54.233 END TEST accel_crc32c_C2 00:06:54.233 ************************************ 00:06:54.233 11:56:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.233 11:56:43 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:54.233 11:56:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:54.233 11:56:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.233 11:56:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.233 ************************************ 00:06:54.233 START TEST accel_copy 00:06:54.233 ************************************ 00:06:54.233 11:56:43 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
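[editor's note] accel_crc32c_C2 above reruns the crc32c workload with -C 2 and the default seed of 0; per the option listing printed earlier by accel_wrong_workload, -C configures the io vector size for supported workloads (default 1). Side by side, the two invocations the suite traced:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # accel_crc32c: default io vector, seed 32
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2     # accel_crc32c_C2: io vector size 2, seed 0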
00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:54.233 11:56:43 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:54.233 [2024-07-15 11:56:44.013934] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:54.233 [2024-07-15 11:56:44.013985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958307 ] 00:06:54.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.233 [2024-07-15 11:56:44.064039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.233 [2024-07-15 11:56:44.104371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.233 11:56:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 
11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:55.612 11:56:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.612 00:06:55.612 real 0m1.288s 00:06:55.612 user 0m1.188s 00:06:55.612 sys 0m0.112s 00:06:55.612 11:56:45 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.612 11:56:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.612 ************************************ 00:06:55.612 END TEST accel_copy 00:06:55.612 ************************************ 00:06:55.612 11:56:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.612 11:56:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.612 11:56:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.612 11:56:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.612 11:56:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.612 ************************************ 00:06:55.612 START TEST accel_fill 00:06:55.612 ************************************ 00:06:55.612 11:56:45 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:55.612 [2024-07-15 11:56:45.369214] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:55.612 [2024-07-15 11:56:45.369286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958561 ] 00:06:55.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.612 [2024-07-15 11:56:45.436106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.612 [2024-07-15 11:56:45.475479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.612 11:56:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:57.063 11:56:46 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.063 00:06:57.063 real 0m1.303s 00:06:57.063 user 0m1.200s 00:06:57.063 sys 0m0.118s 00:06:57.063 11:56:46 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.063 11:56:46 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:57.063 ************************************ 00:06:57.063 END TEST accel_fill 00:06:57.063 ************************************ 00:06:57.063 11:56:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.063 11:56:46 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:57.063 11:56:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:57.063 11:56:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.063 11:56:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.063 ************************************ 00:06:57.063 START TEST accel_copy_crc32c 00:06:57.063 ************************************ 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:57.063 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:57.063 [2024-07-15 11:56:46.735866] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:57.064 [2024-07-15 11:56:46.735915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958809 ] 00:06:57.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.064 [2024-07-15 11:56:46.802257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.064 [2024-07-15 11:56:46.842378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.064 
11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.064 11:56:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.440 00:06:58.440 real 0m1.304s 00:06:58.440 user 0m1.201s 00:06:58.440 sys 0m0.118s 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.440 11:56:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.440 ************************************ 00:06:58.440 END TEST accel_copy_crc32c 00:06:58.440 ************************************ 00:06:58.440 11:56:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.440 11:56:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.440 11:56:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.440 11:56:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.440 11:56:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.440 ************************************ 00:06:58.440 START TEST accel_copy_crc32c_C2 00:06:58.440 ************************************ 00:06:58.440 11:56:48 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.440 [2024-07-15 11:56:48.106693] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:06:58.440 [2024-07-15 11:56:48.106755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959055 ] 00:06:58.440 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.440 [2024-07-15 11:56:48.178729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.440 [2024-07-15 11:56:48.219638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.440 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.441 11:56:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.417 00:06:59.417 real 0m1.313s 00:06:59.417 user 0m1.198s 00:06:59.417 sys 0m0.129s 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.417 11:56:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.417 ************************************ 00:06:59.417 END TEST accel_copy_crc32c_C2 00:06:59.417 ************************************ 00:06:59.677 11:56:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.677 11:56:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:59.677 11:56:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.677 11:56:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.677 11:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.677 ************************************ 00:06:59.677 START TEST accel_dualcast 00:06:59.677 ************************************ 00:06:59.677 11:56:49 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:59.677 [2024-07-15 11:56:49.486196] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:06:59.677 [2024-07-15 11:56:49.486271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959310 ] 00:06:59.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.677 [2024-07-15 11:56:49.555421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.677 [2024-07-15 11:56:49.595386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:59.677 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.678 11:56:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:01.057 11:56:50 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.057 00:07:01.057 real 0m1.309s 00:07:01.057 user 0m1.199s 00:07:01.057 sys 0m0.124s 00:07:01.057 11:56:50 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.057 11:56:50 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:01.057 ************************************ 00:07:01.057 END TEST accel_dualcast 00:07:01.057 ************************************ 00:07:01.057 11:56:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.057 11:56:50 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.057 11:56:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.057 11:56:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.057 11:56:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.057 ************************************ 00:07:01.057 START TEST accel_compare 00:07:01.057 ************************************ 00:07:01.057 11:56:50 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:01.057 11:56:50 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:01.057 [2024-07-15 11:56:50.858587] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:07:01.057 [2024-07-15 11:56:50.858654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959555 ] 00:07:01.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.057 [2024-07-15 11:56:50.930234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.057 [2024-07-15 11:56:50.971172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.057 11:56:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 
11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:02.437 11:56:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.437 00:07:02.437 real 0m1.314s 00:07:02.437 user 0m1.192s 00:07:02.437 sys 0m0.135s 00:07:02.437 11:56:52 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.437 11:56:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:02.437 ************************************ 00:07:02.437 END TEST accel_compare 00:07:02.437 ************************************ 00:07:02.437 11:56:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.437 11:56:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:02.437 11:56:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.437 11:56:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.437 11:56:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.437 ************************************ 00:07:02.437 START TEST accel_xor 00:07:02.437 ************************************ 00:07:02.437 11:56:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.437 [2024-07-15 11:56:52.239313] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
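Most of the surrounding trace comes from the harness loop that splits accel_perf's settings printout on ':' and records the opcode and module for the end-of-test check. A rough sketch of that pattern for the xor run starting here follows; the *opcode*/*module* match patterns and the word-trimming are assumptions for illustration, not the exact accel.sh source.
# Sketch of the "IFS=: read -r var val" loop visible in the xtrace output.
accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
while IFS=: read -r var val; do
  case "$var" in
    *opcode*) accel_opc="${val##* }" ;;     # e.g. "xor"
    *module*) accel_module="${val##* }" ;;  # e.g. "software"
  esac
done < <("$accel_perf" -t 1 -w xor -y)
echo "opcode=$accel_opc module=$accel_module"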
00:07:02.437 [2024-07-15 11:56:52.239384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959811 ] 00:07:02.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.437 [2024-07-15 11:56:52.306694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.437 [2024-07-15 11:56:52.346268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.437 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.438 11:56:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.816 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.817 00:07:03.817 real 0m1.307s 00:07:03.817 user 0m1.190s 00:07:03.817 sys 0m0.130s 00:07:03.817 11:56:53 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.817 11:56:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:03.817 ************************************ 00:07:03.817 END TEST accel_xor 00:07:03.817 ************************************ 00:07:03.817 11:56:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.817 11:56:53 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:03.817 11:56:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:03.817 11:56:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.817 11:56:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.817 ************************************ 00:07:03.817 START TEST accel_xor 00:07:03.817 ************************************ 00:07:03.817 11:56:53 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:03.817 [2024-07-15 11:56:53.608253] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
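The second xor pass adds -x 3, and the val=3 in its trace (versus val=2 in the run above) indicates three source buffers are combined instead of the default two. The equivalent direct call, with flags taken from the traced command line, would be roughly:
# Sketch: 1-second software xor across three source buffers, verified.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w xor -y -x 3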
00:07:03.817 [2024-07-15 11:56:53.608309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960056 ] 00:07:03.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.817 [2024-07-15 11:56:53.679328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.817 [2024-07-15 11:56:53.720295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.817 11:56:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.196 11:56:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.196 00:07:05.196 real 0m1.309s 00:07:05.196 user 0m1.197s 00:07:05.196 sys 0m0.125s 00:07:05.196 11:56:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.196 11:56:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.196 ************************************ 00:07:05.196 END TEST accel_xor 00:07:05.196 ************************************ 00:07:05.196 11:56:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.196 11:56:54 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:05.196 11:56:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.196 11:56:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.196 11:56:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.196 ************************************ 00:07:05.196 START TEST accel_dif_verify 00:07:05.196 ************************************ 00:07:05.196 11:56:54 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.196 11:56:54 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.196 [2024-07-15 11:56:54.987265] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
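The dif_verify run configures 4096-byte buffers with what appear to be 512-byte blocks carrying 8 bytes of per-block protection information (the '512 bytes'/'8 bytes' values below); that geometry comes from the harness JSON config rather than from command-line flags. A bare-bones direct invocation mirroring the traced flags would be:
# Sketch: 1-second software dif_verify run (this test passes no -y flag).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w dif_verify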
00:07:05.196 [2024-07-15 11:56:54.987333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960303 ] 00:07:05.196 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.196 [2024-07-15 11:56:55.056828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.196 [2024-07-15 11:56:55.096590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:05.196 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.197 11:56:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:06.575 11:56:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.575 00:07:06.575 real 0m1.310s 00:07:06.575 user 0m1.198s 00:07:06.575 sys 0m0.126s 00:07:06.575 11:56:56 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.575 11:56:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:06.575 ************************************ 00:07:06.575 END TEST accel_dif_verify 00:07:06.575 ************************************ 00:07:06.575 11:56:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.575 11:56:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:06.575 11:56:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:06.575 11:56:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.575 11:56:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.575 ************************************ 00:07:06.575 START TEST accel_dif_generate 00:07:06.575 ************************************ 00:07:06.575 11:56:56 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 
11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:06.575 [2024-07-15 11:56:56.363359] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:06.575 [2024-07-15 11:56:56.363426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960554 ] 00:07:06.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.575 [2024-07-15 11:56:56.431029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.575 [2024-07-15 11:56:56.472038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:06.575 11:56:56 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.575 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.576 11:56:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.954 11:56:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.955 11:56:57 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:07.955 11:56:57 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.955 00:07:07.955 real 0m1.308s 00:07:07.955 user 0m1.197s 00:07:07.955 sys 0m0.125s 00:07:07.955 11:56:57 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.955 11:56:57 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:07.955 ************************************ 00:07:07.955 END TEST accel_dif_generate 00:07:07.955 ************************************ 00:07:07.955 11:56:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.955 11:56:57 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:07.955 11:56:57 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:07.955 11:56:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.955 11:56:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.955 ************************************ 00:07:07.955 START TEST accel_dif_generate_copy 00:07:07.955 ************************************ 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:07.955 [2024-07-15 11:56:57.737201] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
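Each of these tests finishes with the same three-way check: the opcode and module parsed from accel_perf's output must be non-empty and the module must equal "software". A compact sketch of that assertion for the dif_generate_copy run starting here; the two placeholder assignments stand in for values the harness actually parses from the output.
# Sketch: run dif_generate_copy for 1 second, then apply the accel.sh@27-style checks.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
  -t 1 -w dif_generate_copy
accel_opc=dif_generate_copy   # placeholder: parsed from the output in the real harness
accel_module=software         # placeholder: likewise parsed, "software" in these runs
[[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]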
00:07:07.955 [2024-07-15 11:56:57.737255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960808 ] 00:07:07.955 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.955 [2024-07-15 11:56:57.803976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.955 [2024-07-15 11:56:57.842864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.955 11:56:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.334 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:09.335 11:56:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.335 00:07:09.335 real 0m1.303s 00:07:09.335 user 0m1.202s 00:07:09.335 sys 0m0.116s 00:07:09.335 11:56:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.335 11:56:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.335 ************************************ 00:07:09.335 END TEST accel_dif_generate_copy 00:07:09.335 ************************************ 00:07:09.335 11:56:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.335 11:56:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:09.335 11:56:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.335 11:56:59 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:09.335 11:56:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.335 11:56:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.335 ************************************ 00:07:09.335 START TEST accel_comp 00:07:09.335 ************************************ 00:07:09.335 11:56:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.335 11:56:59 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:09.335 [2024-07-15 11:56:59.108915] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:09.335 [2024-07-15 11:56:59.108981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961053 ] 00:07:09.335 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.335 [2024-07-15 11:56:59.177851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.335 [2024-07-15 11:56:59.217390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.335 11:56:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:10.720 11:57:00 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.720 00:07:10.720 real 0m1.311s 00:07:10.720 user 0m1.195s 00:07:10.720 sys 0m0.129s 00:07:10.720 11:57:00 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.720 11:57:00 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:10.720 ************************************ 00:07:10.720 END TEST accel_comp 00:07:10.720 ************************************ 00:07:10.720 11:57:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.720 11:57:00 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.720 11:57:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.720 11:57:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.720 11:57:00 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.720 ************************************ 00:07:10.720 START TEST accel_decomp 00:07:10.720 ************************************ 00:07:10.720 11:57:00 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:10.720 [2024-07-15 11:57:00.488557] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:07:10.720 [2024-07-15 11:57:00.488618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961339 ] 00:07:10.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.720 [2024-07-15 11:57:00.557240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.720 [2024-07-15 11:57:00.600917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.720 11:57:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.102 11:57:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.103 11:57:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.103 11:57:01 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.103 00:07:12.103 real 0m1.316s 00:07:12.103 user 0m1.202s 00:07:12.103 sys 0m0.127s 00:07:12.103 11:57:01 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.103 11:57:01 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:12.103 ************************************ 00:07:12.103 END TEST accel_decomp 00:07:12.103 ************************************ 00:07:12.103 11:57:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.103 11:57:01 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.103 11:57:01 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:12.103 11:57:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.103 11:57:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.103 ************************************ 00:07:12.103 START TEST accel_decomp_full 00:07:12.103 ************************************ 00:07:12.103 11:57:01 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:12.103 11:57:01 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:12.103 11:57:01 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:12.103 [2024-07-15 11:57:01.872555] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:12.103 [2024-07-15 11:57:01.872604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961663 ] 00:07:12.103 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.103 [2024-07-15 11:57:01.939251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.103 [2024-07-15 11:57:01.980906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.103 11:57:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.480 11:57:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.480 00:07:13.480 real 0m1.319s 00:07:13.480 user 0m1.209s 00:07:13.480 sys 0m0.125s 00:07:13.480 11:57:03 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.480 11:57:03 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:13.480 ************************************ 00:07:13.480 END TEST accel_decomp_full 00:07:13.480 ************************************ 00:07:13.480 11:57:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.480 11:57:03 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.480 11:57:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:13.480 11:57:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.480 11:57:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.480 ************************************ 00:07:13.480 START TEST accel_decomp_mcore 00:07:13.480 ************************************ 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:13.480 [2024-07-15 11:57:03.261513] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:07:13.480 [2024-07-15 11:57:03.261571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid961927 ] 00:07:13.480 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.480 [2024-07-15 11:57:03.332963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.480 [2024-07-15 11:57:03.377272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.480 [2024-07-15 11:57:03.377392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.480 [2024-07-15 11:57:03.377497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.480 [2024-07-15 11:57:03.377498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.480 11:57:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.859 00:07:14.859 real 0m1.327s 00:07:14.859 user 0m4.532s 00:07:14.859 sys 0m0.137s 00:07:14.859 11:57:04 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.859 11:57:04 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:14.859 ************************************ 00:07:14.859 END TEST accel_decomp_mcore 00:07:14.859 ************************************ 00:07:14.859 11:57:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.859 11:57:04 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.859 11:57:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:14.859 11:57:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.859 11:57:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.859 ************************************ 00:07:14.859 START TEST accel_decomp_full_mcore 00:07:14.859 ************************************ 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:14.859 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:14.859 [2024-07-15 11:57:04.647940] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:07:14.859 [2024-07-15 11:57:04.647994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962181 ] 00:07:14.859 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.859 [2024-07-15 11:57:04.718450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:14.860 [2024-07-15 11:57:04.760875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.860 [2024-07-15 11:57:04.760981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.860 [2024-07-15 11:57:04.761090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.860 [2024-07-15 11:57:04.761090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.860 11:57:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.240 00:07:16.240 real 0m1.335s 00:07:16.240 user 0m4.570s 00:07:16.240 sys 0m0.145s 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.240 11:57:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:16.240 ************************************ 00:07:16.240 END TEST accel_decomp_full_mcore 00:07:16.240 ************************************ 00:07:16.240 11:57:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.240 11:57:05 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.240 11:57:05 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:16.240 11:57:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.240 11:57:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.240 ************************************ 00:07:16.240 START TEST accel_decomp_mthread 00:07:16.240 ************************************ 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:16.240 [2024-07-15 11:57:06.047575] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
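(Note on the run launched above) The accel_decomp_mthread case drives the same accel_perf decompress workload on a single core, this time with two worker threads. A minimal sketch of the equivalent standalone invocation, assuming the working directory is the spdk checkout used in this job and that no accel module is configured, so the software module services the operation (the "-n software" checks later in the output confirm this):

    build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
    # -t 1           run the workload for 1 second
    # -w decompress  decompress workload against the harness's bib test file
    # -y             verify the output
    # -T 2           two worker threads per core (the "mthread" in the test name)

The harness additionally passes -c /dev/fd/62 to feed its JSON accel configuration (empty in this run) through a file descriptor; that argument can be omitted when reproducing the run by hand.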
00:07:16.240 [2024-07-15 11:57:06.047627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962430 ] 00:07:16.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.240 [2024-07-15 11:57:06.101878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.240 [2024-07-15 11:57:06.142431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.240 11:57:06 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.240 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.241 11:57:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.614 00:07:17.614 real 0m1.301s 00:07:17.614 user 0m1.192s 00:07:17.614 sys 0m0.122s 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.614 11:57:07 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:17.614 ************************************ 00:07:17.614 END TEST accel_decomp_mthread 00:07:17.614 ************************************ 00:07:17.614 11:57:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.614 11:57:07 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.614 11:57:07 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:17.614 11:57:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.614 11:57:07 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.614 ************************************ 00:07:17.614 START TEST accel_decomp_full_mthread 00:07:17.614 ************************************ 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:17.614 [2024-07-15 11:57:07.416411] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:07:17.614 [2024-07-15 11:57:07.416474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962679 ] 00:07:17.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.614 [2024-07-15 11:57:07.487018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.614 [2024-07-15 11:57:07.526981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.614 11:57:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.988 00:07:18.988 real 0m1.342s 00:07:18.988 user 0m1.226s 00:07:18.988 sys 0m0.129s 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.988 11:57:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:18.988 ************************************ 00:07:18.988 END TEST accel_decomp_full_mthread 
00:07:18.988 ************************************ 00:07:18.988 11:57:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.988 11:57:08 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:18.988 11:57:08 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.988 11:57:08 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.988 11:57:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:18.988 11:57:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.988 11:57:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.988 11:57:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.988 11:57:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.988 11:57:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.989 11:57:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.989 11:57:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.989 11:57:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:18.989 11:57:08 accel -- accel/accel.sh@41 -- # jq -r . 00:07:18.989 ************************************ 00:07:18.989 START TEST accel_dif_functional_tests 00:07:18.989 ************************************ 00:07:18.989 11:57:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.989 [2024-07-15 11:57:08.847628] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:18.989 [2024-07-15 11:57:08.847663] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963100 ] 00:07:18.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.989 [2024-07-15 11:57:08.914320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.989 [2024-07-15 11:57:08.953952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.989 [2024-07-15 11:57:08.954060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.989 [2024-07-15 11:57:08.954061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.248 00:07:19.248 00:07:19.248 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.248 http://cunit.sourceforge.net/ 00:07:19.248 00:07:19.248 00:07:19.248 Suite: accel_dif 00:07:19.248 Test: verify: DIF generated, GUARD check ...passed 00:07:19.248 Test: verify: DIF generated, APPTAG check ...passed 00:07:19.248 Test: verify: DIF generated, REFTAG check ...passed 00:07:19.248 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:57:09.017573] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:19.248 passed 00:07:19.248 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:57:09.017630] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:19.248 passed 00:07:19.248 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:57:09.017650] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:19.248 passed 00:07:19.248 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:19.248 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 11:57:09.017693] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:19.248 passed 00:07:19.248 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:19.248 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:19.248 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:19.248 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:57:09.017789] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:19.248 passed 00:07:19.248 Test: verify copy: DIF generated, GUARD check ...passed 00:07:19.248 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:19.248 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:19.248 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:57:09.017890] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:19.248 passed 00:07:19.248 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:57:09.017910] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:19.248 passed 00:07:19.248 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:57:09.017929] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:19.248 passed 00:07:19.248 Test: generate copy: DIF generated, GUARD check ...passed 00:07:19.248 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:19.248 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:19.248 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:19.248 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:19.248 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:19.248 Test: generate copy: iovecs-len validate ...[2024-07-15 11:57:09.018087] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:19.248 passed 00:07:19.248 Test: generate copy: buffer alignment validate ...passed 00:07:19.248 00:07:19.248 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.248 suites 1 1 n/a 0 0 00:07:19.248 tests 26 26 26 0 0 00:07:19.248 asserts 115 115 115 0 n/a 00:07:19.248 00:07:19.248 Elapsed time = 0.000 seconds 00:07:19.248 00:07:19.248 real 0m0.375s 00:07:19.248 user 0m0.560s 00:07:19.248 sys 0m0.154s 00:07:19.248 11:57:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.248 11:57:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:19.248 ************************************ 00:07:19.248 END TEST accel_dif_functional_tests 00:07:19.248 ************************************ 00:07:19.248 11:57:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.248 00:07:19.248 real 0m30.391s 00:07:19.248 user 0m33.962s 00:07:19.248 sys 0m4.493s 00:07:19.248 11:57:09 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.248 11:57:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.248 ************************************ 00:07:19.248 END TEST accel 00:07:19.248 ************************************ 00:07:19.248 11:57:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:19.248 11:57:09 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:19.248 11:57:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.248 11:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.248 11:57:09 -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 ************************************ 00:07:19.508 START TEST accel_rpc 00:07:19.508 ************************************ 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:19.508 * Looking for test storage... 00:07:19.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:19.508 11:57:09 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:19.508 11:57:09 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=963336 00:07:19.508 11:57:09 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 963336 00:07:19.508 11:57:09 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 963336 ']' 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.508 11:57:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.508 [2024-07-15 11:57:09.419872] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
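(Note on the accel_rpc suite starting above) It runs a bare spdk_tgt with --wait-for-rpc and exercises the accel opcode-assignment RPCs before letting initialization finish. A rough sketch of the same flow driven by hand with the in-tree rpc.py, assuming the target's RPC socket is up before the calls are issued (paths relative to the spdk checkout used here):

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py accel_assign_opc -o copy -m incorrect     # accepted pre-init even for a nonexistent module
    scripts/rpc.py accel_assign_opc -o copy -m software      # reassign the copy opcode to the software module
    scripts/rpc.py framework_start_init                      # finish subsystem initialization
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print "software"

The accel_assign_opcode test below issues exactly these calls through its rpc_cmd wrapper and then kills the target.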
00:07:19.508 [2024-07-15 11:57:09.419919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963336 ] 00:07:19.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.508 [2024-07-15 11:57:09.488389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.804 [2024-07-15 11:57:09.530586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.804 11:57:09 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.804 11:57:09 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.804 11:57:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:19.804 11:57:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:19.804 11:57:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:19.804 11:57:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:19.804 11:57:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:19.805 11:57:09 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.805 11:57:09 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.805 11:57:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 ************************************ 00:07:19.805 START TEST accel_assign_opcode 00:07:19.805 ************************************ 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 [2024-07-15 11:57:09.591037] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 [2024-07-15 11:57:09.599044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:19.805 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.063 software 00:07:20.063 00:07:20.063 real 0m0.224s 00:07:20.063 user 0m0.046s 00:07:20.063 sys 0m0.009s 00:07:20.063 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.063 11:57:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:20.063 ************************************ 00:07:20.063 END TEST accel_assign_opcode 00:07:20.063 ************************************ 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:20.063 11:57:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 963336 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 963336 ']' 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 963336 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 963336 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 963336' 00:07:20.063 killing process with pid 963336 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@967 -- # kill 963336 00:07:20.063 11:57:09 accel_rpc -- common/autotest_common.sh@972 -- # wait 963336 00:07:20.321 00:07:20.321 real 0m0.904s 00:07:20.321 user 0m0.845s 00:07:20.321 sys 0m0.392s 00:07:20.321 11:57:10 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.321 11:57:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.321 ************************************ 00:07:20.321 END TEST accel_rpc 00:07:20.321 ************************************ 00:07:20.321 11:57:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.321 11:57:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.321 11:57:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.321 11:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.321 11:57:10 -- common/autotest_common.sh@10 -- # set +x 00:07:20.321 ************************************ 00:07:20.321 START TEST app_cmdline 00:07:20.321 ************************************ 00:07:20.321 11:57:10 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.579 * Looking for test storage... 
00:07:20.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.579 11:57:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:20.579 11:57:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=963688 00:07:20.579 11:57:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 963688 00:07:20.580 11:57:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 963688 ']' 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.580 11:57:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.580 [2024-07-15 11:57:10.388703] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:20.580 [2024-07-15 11:57:10.388756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963688 ] 00:07:20.580 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.580 [2024-07-15 11:57:10.453987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.580 [2024-07-15 11:57:10.495089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.838 11:57:10 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.838 11:57:10 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:20.838 11:57:10 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:21.096 { 00:07:21.096 "version": "SPDK v24.09-pre git sha1 2728651ee", 00:07:21.096 "fields": { 00:07:21.096 "major": 24, 00:07:21.096 "minor": 9, 00:07:21.096 "patch": 0, 00:07:21.096 "suffix": "-pre", 00:07:21.096 "commit": "2728651ee" 00:07:21.096 } 00:07:21.096 } 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:21.096 11:57:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.096 11:57:10 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.096 request: 00:07:21.096 { 00:07:21.096 "method": "env_dpdk_get_mem_stats", 00:07:21.096 "req_id": 1 00:07:21.096 } 00:07:21.096 Got JSON-RPC error response 00:07:21.096 response: 00:07:21.096 { 00:07:21.096 "code": -32601, 00:07:21.096 "message": "Method not found" 00:07:21.096 } 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.096 11:57:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 963688 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 963688 ']' 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 963688 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.096 11:57:11 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 963688 00:07:21.354 11:57:11 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.354 11:57:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.354 11:57:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 963688' 00:07:21.354 killing process with pid 963688 00:07:21.354 11:57:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 963688 00:07:21.354 11:57:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 963688 00:07:21.612 00:07:21.612 real 0m1.160s 00:07:21.612 user 0m1.332s 00:07:21.612 sys 0m0.413s 00:07:21.612 11:57:11 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.612 
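(Note on the app_cmdline checks above) The target for this suite was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable: spdk_get_version returns the version object shown above, rpc_get_methods lists exactly the two whitelisted names, and env_dpdk_get_mem_stats (a method that exists on an unrestricted target) is rejected with the JSON-RPC "Method not found" error, code -32601. A sketch of the same probe by hand, under the same path assumptions as the notes above:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed
    scripts/rpc.py rpc_get_methods          # allowed; returns only the two whitelisted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # fails with code -32601, "Method not found"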
11:57:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.612 ************************************ 00:07:21.612 END TEST app_cmdline 00:07:21.612 ************************************ 00:07:21.612 11:57:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.612 11:57:11 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:21.612 11:57:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.612 11:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.612 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.612 ************************************ 00:07:21.612 START TEST version 00:07:21.612 ************************************ 00:07:21.612 11:57:11 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:21.612 * Looking for test storage... 00:07:21.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:21.612 11:57:11 version -- app/version.sh@17 -- # get_header_version major 00:07:21.612 11:57:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.612 11:57:11 version -- app/version.sh@17 -- # major=24 00:07:21.612 11:57:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:21.612 11:57:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.612 11:57:11 version -- app/version.sh@18 -- # minor=9 00:07:21.612 11:57:11 version -- app/version.sh@19 -- # get_header_version patch 00:07:21.612 11:57:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.612 11:57:11 version -- app/version.sh@19 -- # patch=0 00:07:21.612 11:57:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:21.612 11:57:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # cut -f2 00:07:21.612 11:57:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:21.612 11:57:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:21.612 11:57:11 version -- app/version.sh@22 -- # version=24.9 00:07:21.612 11:57:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:21.612 11:57:11 version -- app/version.sh@28 -- # version=24.9rc0 00:07:21.612 11:57:11 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:21.612 11:57:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
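(Note on the version test above) get_header_version pulls each field out of include/spdk/version.h with the grep | cut -f2 | tr -d '"' pipeline shown in the trace; cut -f2 works because the header separates the macro name from its value with a tab. Reconstructed from the major=24, minor=9, patch=0 and suffix=-pre values above, the relevant header lines for this 24.09-pre tree look roughly like:

    #define SPDK_VERSION_MAJOR   24
    #define SPDK_VERSION_MINOR   9
    #define SPDK_VERSION_PATCH   0
    #define SPDK_VERSION_SUFFIX  "-pre"

The script assembles 24.9 (the zero patch level is dropped), rewrites the -pre suffix as rc0, and compares the result against python3 -c 'import spdk; print(spdk.__version__)', whose output follows below.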
00:07:21.882 11:57:11 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:21.882 11:57:11 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:21.882 00:07:21.882 real 0m0.156s 00:07:21.882 user 0m0.088s 00:07:21.882 sys 0m0.103s 00:07:21.882 11:57:11 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.882 11:57:11 version -- common/autotest_common.sh@10 -- # set +x 00:07:21.882 ************************************ 00:07:21.882 END TEST version 00:07:21.882 ************************************ 00:07:21.882 11:57:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.882 11:57:11 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@198 -- # uname -s 00:07:21.882 11:57:11 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:21.882 11:57:11 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.882 11:57:11 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:21.882 11:57:11 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:21.882 11:57:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.882 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.882 11:57:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:21.882 11:57:11 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:21.882 11:57:11 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.882 11:57:11 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.882 11:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.882 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:07:21.882 ************************************ 00:07:21.882 START TEST nvmf_tcp 00:07:21.882 ************************************ 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:21.882 * Looking for test storage... 00:07:21.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.882 11:57:11 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.882 11:57:11 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.882 11:57:11 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.882 11:57:11 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.882 11:57:11 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.882 11:57:11 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.882 11:57:11 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:21.882 11:57:11 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:21.882 11:57:11 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.882 11:57:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.141 ************************************ 00:07:22.141 START TEST nvmf_example 00:07:22.141 ************************************ 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:22.141 * Looking for test storage... 
00:07:22.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.141 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.142 11:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.142 11:57:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:28.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.749 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:28.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:28.750 Found net devices under 
0000:86:00.0: cvl_0_0 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:28.750 Found net devices under 0000:86:00.1: cvl_0_1 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:28.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:28.750 00:07:28.750 --- 10.0.0.2 ping statistics --- 00:07:28.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.750 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:28.750 00:07:28.750 --- 10.0.0.1 ping statistics --- 00:07:28.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.750 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=967133 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 967133 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 967133 ']' 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
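The nvmf_tcp_init trace above builds a two-endpoint NVMe/TCP topology from the two ice ports found earlier: the target port moves into a network namespace, the initiator port stays in the default namespace, TCP port 4420 is opened, and reachability is checked with ping in both directions. A condensed recap of those steps, with interface and namespace names copied from the log (the ip -4 addr flush steps are omitted; this is not the verbatim nvmf/common.sh code):

    ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the listener port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator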
00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.750 11:57:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.750 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.750 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:29.010 11:57:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:29.010 EAL: No free 2048 kB hugepages reported on node 1 
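The rpc_cmd calls in the trace above stand up the example target before the load generator runs. Assuming rpc_cmd forwards to scripts/rpc.py against the example app's default /var/tmp/spdk.sock, an equivalent sequence would look roughly like this (method names and flags copied from the trace; Malloc0 is the bdev name the trace reports back from bdev_malloc_create):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                       # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Load generation as run in the trace: QD 64, 4 KiB I/O, random read/write with -M 30 mix, 10 s, NVMe/TCP.
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The latency table printed next in the log is the output of this spdk_nvme_perf run.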
00:07:41.232 Initializing NVMe Controllers 00:07:41.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:41.232 Initialization complete. Launching workers. 00:07:41.232 ======================================================== 00:07:41.232 Latency(us) 00:07:41.232 Device Information : IOPS MiB/s Average min max 00:07:41.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17887.18 69.87 3578.74 708.30 15976.11 00:07:41.232 ======================================================== 00:07:41.232 Total : 17887.18 69.87 3578.74 708.30 15976.11 00:07:41.232 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:41.232 rmmod nvme_tcp 00:07:41.232 rmmod nvme_fabrics 00:07:41.232 rmmod nvme_keyring 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 967133 ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 967133 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 967133 ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 967133 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 967133 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 967133' 00:07:41.232 killing process with pid 967133 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 967133 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 967133 00:07:41.232 nvmf threads initialize successfully 00:07:41.232 bdev subsystem init successfully 00:07:41.232 created a nvmf target service 00:07:41.232 create targets's poll groups done 00:07:41.232 all subsystems of target started 00:07:41.232 nvmf target is running 00:07:41.232 all subsystems of target stopped 00:07:41.232 destroy targets's poll groups done 00:07:41.232 destroyed the nvmf target service 00:07:41.232 bdev subsystem finish successfully 00:07:41.232 nvmf threads destroy successfully 00:07:41.232 11:57:29 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:41.232 11:57:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.492 00:07:41.492 real 0m19.562s 00:07:41.492 user 0m46.146s 00:07:41.492 sys 0m5.788s 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.492 11:57:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:41.492 ************************************ 00:07:41.492 END TEST nvmf_example 00:07:41.492 ************************************ 00:07:41.754 11:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:41.754 11:57:31 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.754 11:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:41.754 11:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.754 11:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.754 ************************************ 00:07:41.754 START TEST nvmf_filesystem 00:07:41.754 ************************************ 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:41.754 * Looking for test storage... 
00:07:41.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:41.754 11:57:31 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:41.754 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:41.755 #define SPDK_CONFIG_H 00:07:41.755 #define SPDK_CONFIG_APPS 1 00:07:41.755 #define SPDK_CONFIG_ARCH native 00:07:41.755 #undef SPDK_CONFIG_ASAN 00:07:41.755 #undef SPDK_CONFIG_AVAHI 00:07:41.755 #undef SPDK_CONFIG_CET 00:07:41.755 #define SPDK_CONFIG_COVERAGE 1 00:07:41.755 #define SPDK_CONFIG_CROSS_PREFIX 00:07:41.755 #undef SPDK_CONFIG_CRYPTO 00:07:41.755 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:41.755 #undef SPDK_CONFIG_CUSTOMOCF 00:07:41.755 #undef SPDK_CONFIG_DAOS 00:07:41.755 #define SPDK_CONFIG_DAOS_DIR 00:07:41.755 #define SPDK_CONFIG_DEBUG 1 00:07:41.755 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:41.755 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.755 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:41.755 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.755 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:41.755 #undef SPDK_CONFIG_DPDK_UADK 00:07:41.755 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:41.755 #define SPDK_CONFIG_EXAMPLES 1 00:07:41.755 #undef SPDK_CONFIG_FC 00:07:41.755 #define SPDK_CONFIG_FC_PATH 00:07:41.755 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:41.755 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:41.755 #undef SPDK_CONFIG_FUSE 00:07:41.755 #undef SPDK_CONFIG_FUZZER 00:07:41.755 #define SPDK_CONFIG_FUZZER_LIB 00:07:41.755 #undef SPDK_CONFIG_GOLANG 00:07:41.755 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:41.755 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:41.755 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:41.755 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:41.755 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:41.755 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:41.755 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:41.755 #define SPDK_CONFIG_IDXD 1 00:07:41.755 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:41.755 #undef SPDK_CONFIG_IPSEC_MB 00:07:41.755 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:41.755 #define SPDK_CONFIG_ISAL 1 00:07:41.755 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:41.755 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:41.755 #define 
SPDK_CONFIG_LIBDIR 00:07:41.755 #undef SPDK_CONFIG_LTO 00:07:41.755 #define SPDK_CONFIG_MAX_LCORES 128 00:07:41.755 #define SPDK_CONFIG_NVME_CUSE 1 00:07:41.755 #undef SPDK_CONFIG_OCF 00:07:41.755 #define SPDK_CONFIG_OCF_PATH 00:07:41.755 #define SPDK_CONFIG_OPENSSL_PATH 00:07:41.755 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:41.755 #define SPDK_CONFIG_PGO_DIR 00:07:41.755 #undef SPDK_CONFIG_PGO_USE 00:07:41.755 #define SPDK_CONFIG_PREFIX /usr/local 00:07:41.755 #undef SPDK_CONFIG_RAID5F 00:07:41.755 #undef SPDK_CONFIG_RBD 00:07:41.755 #define SPDK_CONFIG_RDMA 1 00:07:41.755 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:41.755 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:41.755 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:41.755 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:41.755 #define SPDK_CONFIG_SHARED 1 00:07:41.755 #undef SPDK_CONFIG_SMA 00:07:41.755 #define SPDK_CONFIG_TESTS 1 00:07:41.755 #undef SPDK_CONFIG_TSAN 00:07:41.755 #define SPDK_CONFIG_UBLK 1 00:07:41.755 #define SPDK_CONFIG_UBSAN 1 00:07:41.755 #undef SPDK_CONFIG_UNIT_TESTS 00:07:41.755 #undef SPDK_CONFIG_URING 00:07:41.755 #define SPDK_CONFIG_URING_PATH 00:07:41.755 #undef SPDK_CONFIG_URING_ZNS 00:07:41.755 #undef SPDK_CONFIG_USDT 00:07:41.755 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:41.755 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:41.755 #define SPDK_CONFIG_VFIO_USER 1 00:07:41.755 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:41.755 #define SPDK_CONFIG_VHOST 1 00:07:41.755 #define SPDK_CONFIG_VIRTIO 1 00:07:41.755 #undef SPDK_CONFIG_VTUNE 00:07:41.755 #define SPDK_CONFIG_VTUNE_DIR 00:07:41.755 #define SPDK_CONFIG_WERROR 1 00:07:41.755 #define SPDK_CONFIG_WPDK_DIR 00:07:41.755 #undef SPDK_CONFIG_XNVME 00:07:41.755 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:41.755 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:41.756 
11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:41.756 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:41.757 
11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
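The trace above is autotest_common.sh assigning a default to each SPDK_TEST_*/SPDK_RUN_* flag, exporting it, and then wiring up SPDK_LIB_DIR, DPDK_LIB_DIR, LD_LIBRARY_PATH, PYTHONPATH and the sanitizer options. A minimal sketch of that default-then-export pattern in bash (variable names and paths below are illustrative, not the exact autotest_common.sh code):

  # Sketch only: keep a caller-supplied value, otherwise fall back to the test default, then export.
  : "${SPDK_TEST_NVMF:=0}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT

  # Library and Python search paths are prepended the same way before any test runs.
  export LD_LIBRARY_PATH="$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:${LD_LIBRARY_PATH:-}"
  export PYTHONPATH="$spdk_root/python:${PYTHONPATH:-}"   # $spdk_root is a placeholder for the checkout path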
00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:41.757 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 969499 ]] 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 969499 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.wNwNuV 00:07:41.758 
11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wNwNuV/tests/target /tmp/spdk.wNwNuV 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=187911086080 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8063213568 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=3375104 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:07:41.758 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986543616 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=606208 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:42.019 * Looking for test storage... 
00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=187911086080 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10277806080 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.019 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
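nvmf/common.sh, sourced just above, derives the host identity once with nvme gen-hostnqn and stores the --hostnqn/--hostid arguments in the NVME_HOST array for the connect step later in the run. A hedged sketch of the same pattern (the UUID extraction shown here is illustrative, not necessarily how common.sh does it):

  # Sketch: generate the host NQN once and reuse it for every nvme connect.
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}        # keep only the UUID part
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Used later as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem NQN> -a <target IP> -s 4420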
00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.020 11:57:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.592 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.592 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:48.592 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
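gather_supported_nvmf_pci_devs, traced above, builds the e810/x722/mlx device-ID lists and folds the matching PCI functions into pci_devs. The per-device scan that follows resolves each function to its kernel net interface through sysfs; a standalone sketch of that lookup (standard sysfs layout, not copied from the script):

  # Sketch: list the net devices that belong to one PCI function.
  pci=0000:86:00.0
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] || continue
      echo "Found net device under $pci: ${netdir##*/}"   # e.g. cvl_0_0
  done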
00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.593 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.593 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.593 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.593 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:07:48.593 00:07:48.593 --- 10.0.0.2 ping statistics --- 00:07:48.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.593 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:07:48.593 00:07:48.593 --- 10.0.0.1 ping statistics --- 00:07:48.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.593 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.593 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.594 ************************************ 00:07:48.594 START TEST nvmf_filesystem_no_in_capsule 00:07:48.594 ************************************ 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=972715 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 972715 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
972715 ']' 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.594 11:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.594 [2024-07-15 11:57:37.720748] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:48.594 [2024-07-15 11:57:37.720795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.594 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.594 [2024-07-15 11:57:37.793901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.594 [2024-07-15 11:57:37.838999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.594 [2024-07-15 11:57:37.839040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.594 [2024-07-15 11:57:37.839047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.594 [2024-07-15 11:57:37.839053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.594 [2024-07-15 11:57:37.839058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
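nvmf_tcp_init and nvmfappstart, traced above, move one port of the NIC into a private namespace, verify connectivity in both directions, and then launch nvmf_tgt inside that namespace, waiting for its RPC socket before any test RPCs are sent. A condensed sketch of that sequence (the address flush and cleanup steps are omitted, and the poll loop stands in for the waitforlisten helper):

  # Sketch: target-side namespace plumbing and target launch.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # root namespace -> target namespace

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Wait until the RPC server answers on the default socket.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
      sleep 0.5
  done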
00:07:48.594 [2024-07-15 11:57:37.839116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.594 [2024-07-15 11:57:37.839249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.594 [2024-07-15 11:57:37.839335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.594 [2024-07-15 11:57:37.839335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.594 [2024-07-15 11:57:38.574354] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.594 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 Malloc1 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 [2024-07-15 11:57:38.717121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.853 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:48.853 { 00:07:48.853 "name": "Malloc1", 00:07:48.853 "aliases": [ 00:07:48.853 "dacaf16d-d571-49a8-aa3e-66d42a2cb2bb" 00:07:48.853 ], 00:07:48.853 "product_name": "Malloc disk", 00:07:48.853 "block_size": 512, 00:07:48.853 "num_blocks": 1048576, 00:07:48.853 "uuid": "dacaf16d-d571-49a8-aa3e-66d42a2cb2bb", 00:07:48.853 "assigned_rate_limits": { 00:07:48.853 "rw_ios_per_sec": 0, 00:07:48.853 "rw_mbytes_per_sec": 0, 00:07:48.853 "r_mbytes_per_sec": 0, 00:07:48.853 "w_mbytes_per_sec": 0 00:07:48.853 }, 00:07:48.853 "claimed": true, 00:07:48.853 "claim_type": "exclusive_write", 00:07:48.853 "zoned": false, 00:07:48.853 "supported_io_types": { 00:07:48.853 "read": true, 00:07:48.853 "write": true, 00:07:48.853 "unmap": true, 00:07:48.853 "flush": true, 00:07:48.853 "reset": true, 00:07:48.853 "nvme_admin": false, 00:07:48.853 "nvme_io": false, 00:07:48.853 "nvme_io_md": false, 00:07:48.853 "write_zeroes": true, 00:07:48.853 "zcopy": true, 00:07:48.853 "get_zone_info": false, 00:07:48.853 "zone_management": false, 00:07:48.853 "zone_append": false, 00:07:48.853 "compare": false, 00:07:48.853 "compare_and_write": false, 00:07:48.853 "abort": true, 00:07:48.853 "seek_hole": false, 00:07:48.853 "seek_data": false, 00:07:48.853 "copy": true, 00:07:48.853 "nvme_iov_md": false 00:07:48.853 }, 00:07:48.853 "memory_domains": [ 00:07:48.853 { 
00:07:48.853 "dma_device_id": "system", 00:07:48.853 "dma_device_type": 1 00:07:48.853 }, 00:07:48.853 { 00:07:48.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.853 "dma_device_type": 2 00:07:48.853 } 00:07:48.853 ], 00:07:48.854 "driver_specific": {} 00:07:48.854 } 00:07:48.854 ]' 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:48.854 11:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.233 11:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.233 11:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:50.233 11:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.233 11:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:50.233 11:57:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.137 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.704 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.704 11:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.706 ************************************ 00:07:53.706 START TEST filesystem_ext4 00:07:53.706 ************************************ 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:53.706 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.707 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:53.707 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:53.707 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:53.707 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:53.707 11:57:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:53.707 mke2fs 1.46.5 (30-Dec-2021) 00:07:53.707 Discarding device blocks: 0/522240 done 00:07:53.707 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:53.707 Filesystem UUID: e30a8030-64d1-4198-aa49-79b464365ab1 00:07:53.707 Superblock backups stored on blocks: 00:07:53.707 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:53.707 00:07:53.707 Allocating group tables: 0/64 done 00:07:53.707 Writing inode tables: 0/64 done 00:07:53.963 Creating journal (8192 blocks): done 00:07:53.963 Writing superblocks and filesystem accounting information: 0/64 done 00:07:53.963 00:07:53.963 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:53.964 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.964 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.964 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:53.964 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.964 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:54.221 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:54.221 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.221 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 972715 00:07:54.221 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.221 11:57:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.221 00:07:54.221 real 0m0.413s 00:07:54.221 user 0m0.026s 00:07:54.221 sys 0m0.060s 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:54.221 ************************************ 00:07:54.221 END TEST filesystem_ext4 00:07:54.221 ************************************ 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:54.221 11:57:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.221 ************************************ 00:07:54.221 START TEST filesystem_btrfs 00:07:54.221 ************************************ 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:54.221 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:54.478 btrfs-progs v6.6.2 00:07:54.478 See https://btrfs.readthedocs.io for more information. 00:07:54.478 00:07:54.478 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:54.478 NOTE: several default settings have changed in version 5.15, please make sure 00:07:54.478 this does not affect your deployments: 00:07:54.478 - DUP for metadata (-m dup) 00:07:54.478 - enabled no-holes (-O no-holes) 00:07:54.478 - enabled free-space-tree (-R free-space-tree) 00:07:54.478 00:07:54.478 Label: (null) 00:07:54.478 UUID: 9c847a49-66e3-4245-8673-f7964c38d52c 00:07:54.478 Node size: 16384 00:07:54.478 Sector size: 4096 00:07:54.478 Filesystem size: 510.00MiB 00:07:54.478 Block group profiles: 00:07:54.478 Data: single 8.00MiB 00:07:54.478 Metadata: DUP 32.00MiB 00:07:54.478 System: DUP 8.00MiB 00:07:54.478 SSD detected: yes 00:07:54.478 Zoned device: no 00:07:54.478 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:54.478 Runtime features: free-space-tree 00:07:54.478 Checksum: crc32c 00:07:54.478 Number of devices: 1 00:07:54.478 Devices: 00:07:54.478 ID SIZE PATH 00:07:54.478 1 510.00MiB /dev/nvme0n1p1 00:07:54.478 00:07:54.478 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:54.478 11:57:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 972715 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.412 00:07:55.412 real 0m1.292s 00:07:55.412 user 0m0.021s 00:07:55.412 sys 0m0.129s 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.412 ************************************ 00:07:55.412 END TEST filesystem_btrfs 00:07:55.412 ************************************ 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.412 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.671 ************************************ 00:07:55.671 START TEST filesystem_xfs 00:07:55.671 ************************************ 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:55.671 11:57:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:55.671 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:55.671 = sectsz=512 attr=2, projid32bit=1 00:07:55.671 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:55.671 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:55.671 data = bsize=4096 blocks=130560, imaxpct=25 00:07:55.671 = sunit=0 swidth=0 blks 00:07:55.671 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:55.671 log =internal log bsize=4096 blocks=16384, version=2 00:07:55.671 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:55.671 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:56.608 Discarding blocks...Done. 
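Each mkfs in this log is followed by the same smoke test, traced below for xfs and earlier for ext4 and btrfs: mount the new partition, create and delete a file with syncs in between, unmount, then confirm the target process and its namespace are still visible. A condensed sketch of that cycle, using the device, mountpoint and pid from this particular run:

    # per-filesystem smoke test as traced in target/filesystem.sh
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 972715                              # nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still exported to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # test partition still present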
00:07:56.608 11:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:56.608 11:57:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 972715 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.193 00:07:59.193 real 0m3.616s 00:07:59.193 user 0m0.022s 00:07:59.193 sys 0m0.074s 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.193 ************************************ 00:07:59.193 END TEST filesystem_xfs 00:07:59.193 ************************************ 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:59.193 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.453 11:57:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 972715 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 972715 ']' 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 972715 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 972715 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 972715' 00:07:59.453 killing process with pid 972715 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 972715 00:07:59.453 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 972715 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:59.713 00:07:59.713 real 0m11.953s 00:07:59.713 user 0m47.002s 00:07:59.713 sys 0m1.232s 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 ************************************ 00:07:59.713 END TEST nvmf_filesystem_no_in_capsule 00:07:59.713 ************************************ 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 ************************************ 00:07:59.713 START TEST nvmf_filesystem_in_capsule 00:07:59.713 ************************************ 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=974821 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 974821 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 974821 ']' 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.713 11:57:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.972 [2024-07-15 11:57:49.745278] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:07:59.973 [2024-07-15 11:57:49.745322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.973 [2024-07-15 11:57:49.816790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.973 [2024-07-15 11:57:49.858635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.973 [2024-07-15 11:57:49.858675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:59.973 [2024-07-15 11:57:49.858682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.973 [2024-07-15 11:57:49.858689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.973 [2024-07-15 11:57:49.858694] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.973 [2024-07-15 11:57:49.858738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.973 [2024-07-15 11:57:49.858847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.973 [2024-07-15 11:57:49.858958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.973 [2024-07-15 11:57:49.858959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.555 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.555 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:00.555 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.555 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.555 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 [2024-07-15 11:57:50.592220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 [2024-07-15 11:57:50.743418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:00.815 { 00:08:00.815 "name": "Malloc1", 00:08:00.815 "aliases": [ 00:08:00.815 "157fa0f3-4aa9-4286-819b-de88c1de99f6" 00:08:00.815 ], 00:08:00.815 "product_name": "Malloc disk", 00:08:00.815 "block_size": 512, 00:08:00.815 "num_blocks": 1048576, 00:08:00.815 "uuid": "157fa0f3-4aa9-4286-819b-de88c1de99f6", 00:08:00.815 "assigned_rate_limits": { 00:08:00.815 "rw_ios_per_sec": 0, 00:08:00.815 "rw_mbytes_per_sec": 0, 00:08:00.815 "r_mbytes_per_sec": 0, 00:08:00.815 "w_mbytes_per_sec": 0 00:08:00.815 }, 00:08:00.815 "claimed": true, 00:08:00.815 "claim_type": "exclusive_write", 00:08:00.815 "zoned": false, 00:08:00.815 "supported_io_types": { 00:08:00.815 "read": true, 00:08:00.815 "write": true, 00:08:00.815 "unmap": true, 00:08:00.815 "flush": true, 00:08:00.815 "reset": true, 00:08:00.815 "nvme_admin": false, 00:08:00.815 "nvme_io": false, 00:08:00.815 "nvme_io_md": false, 00:08:00.815 "write_zeroes": true, 00:08:00.815 "zcopy": true, 00:08:00.815 "get_zone_info": false, 00:08:00.815 "zone_management": false, 00:08:00.815 
"zone_append": false, 00:08:00.815 "compare": false, 00:08:00.815 "compare_and_write": false, 00:08:00.815 "abort": true, 00:08:00.815 "seek_hole": false, 00:08:00.815 "seek_data": false, 00:08:00.815 "copy": true, 00:08:00.815 "nvme_iov_md": false 00:08:00.815 }, 00:08:00.815 "memory_domains": [ 00:08:00.815 { 00:08:00.815 "dma_device_id": "system", 00:08:00.815 "dma_device_type": 1 00:08:00.815 }, 00:08:00.815 { 00:08:00.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.815 "dma_device_type": 2 00:08:00.815 } 00:08:00.815 ], 00:08:00.815 "driver_specific": {} 00:08:00.815 } 00:08:00.815 ]' 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:00.815 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:01.073 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:01.073 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:01.073 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:01.073 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:01.073 11:57:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.010 11:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.010 11:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.010 11:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.010 11:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.010 11:57:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:04.565 11:57:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:04.565 11:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:04.565 11:57:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.503 ************************************ 00:08:05.503 START TEST filesystem_in_capsule_ext4 00:08:05.503 ************************************ 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:05.503 11:57:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:05.503 11:57:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:05.503 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.503 Discarding device blocks: 0/522240 done 00:08:05.503 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.503 Filesystem UUID: 25709f29-bac3-418f-96b5-239574a1aad2 00:08:05.503 Superblock backups stored on blocks: 00:08:05.503 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.503 00:08:05.503 Allocating group tables: 0/64 done 00:08:05.503 Writing inode tables: 0/64 done 00:08:06.879 Creating journal (8192 blocks): done 00:08:07.704 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:08:07.705 00:08:07.705 11:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:07.705 11:57:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.270 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 974821 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.528 00:08:08.528 real 0m2.947s 00:08:08.528 user 0m0.027s 00:08:08.528 sys 0m0.066s 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.528 ************************************ 00:08:08.528 END TEST filesystem_in_capsule_ext4 00:08:08.528 ************************************ 
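Every filesystem case in this log is launched through run_test, which emits the START TEST / END TEST banners and the real/user/sys timing shown above. A rough reconstruction of such a wrapper, inferred from the banners rather than taken from the harness itself:

    # run_test-style wrapper, reconstructed from the banners in this log
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # produces the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g.: run_test_sketch filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1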
00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.528 ************************************ 00:08:08.528 START TEST filesystem_in_capsule_btrfs 00:08:08.528 ************************************ 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:08.528 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.786 btrfs-progs v6.6.2 00:08:08.786 See https://btrfs.readthedocs.io for more information. 00:08:08.786 00:08:08.786 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:08.786 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.786 this does not affect your deployments: 00:08:08.786 - DUP for metadata (-m dup) 00:08:08.786 - enabled no-holes (-O no-holes) 00:08:08.786 - enabled free-space-tree (-R free-space-tree) 00:08:08.786 00:08:08.786 Label: (null) 00:08:08.786 UUID: 5326762c-749f-4612-a3c5-f8e9a5308998 00:08:08.786 Node size: 16384 00:08:08.786 Sector size: 4096 00:08:08.786 Filesystem size: 510.00MiB 00:08:08.786 Block group profiles: 00:08:08.786 Data: single 8.00MiB 00:08:08.786 Metadata: DUP 32.00MiB 00:08:08.786 System: DUP 8.00MiB 00:08:08.786 SSD detected: yes 00:08:08.786 Zoned device: no 00:08:08.786 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.786 Runtime features: free-space-tree 00:08:08.786 Checksum: crc32c 00:08:08.786 Number of devices: 1 00:08:08.786 Devices: 00:08:08.786 ID SIZE PATH 00:08:08.786 1 510.00MiB /dev/nvme0n1p1 00:08:08.786 00:08:08.786 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:08.786 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 974821 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.044 00:08:09.044 real 0m0.475s 00:08:09.044 user 0m0.023s 00:08:09.044 sys 0m0.127s 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.044 ************************************ 00:08:09.044 END TEST filesystem_in_capsule_btrfs 00:08:09.044 ************************************ 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.044 ************************************ 00:08:09.044 START TEST filesystem_in_capsule_xfs 00:08:09.044 ************************************ 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.044 11:57:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.303 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.303 = sectsz=512 attr=2, projid32bit=1 00:08:09.303 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.303 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.303 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.303 = sunit=0 swidth=0 blks 00:08:09.303 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.303 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.303 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.303 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:10.238 Discarding blocks...Done. 
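The xtrace output around this point follows the make_filesystem/mount flow from target/filesystem.sh: format the partition on the NVMe-oF attached namespace, mount it, create and remove a file, then unmount while confirming the nvmf target process is still alive and the namespace is still visible. Consolidated into a plain shell sketch for readability (device, mount point and pid are the ones seen in this trace; the sketch is illustrative, not the test script itself):

# illustrative replay of the filesystem smoke test traced here
DEV=/dev/nvme0n1p1          # partition on the NVMe-oF attached namespace
MNT=/mnt/device
TGT_PID=974821              # nvmf_tgt pid from this run

mkfs.xfs -f "$DEV"          # the btrfs variant of the test uses: mkfs.btrfs -f "$DEV"
mount "$DEV" "$MNT"
touch "$MNT/aaa"; sync      # push a write through the in-capsule data path
rm "$MNT/aaa"; sync
umount "$MNT"
kill -0 "$TGT_PID"                        # target must have survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible on the host
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present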
00:08:10.238 11:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:10.238 11:58:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.772 00:08:12.772 real 0m3.475s 00:08:12.772 user 0m0.030s 00:08:12.772 sys 0m0.066s 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.772 ************************************ 00:08:12.772 END TEST filesystem_in_capsule_xfs 00:08:12.772 ************************************ 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:12.772 11:58:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 974821 ']' 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 974821' 00:08:12.772 killing process with pid 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 974821 00:08:12.772 11:58:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 974821 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:13.341 00:08:13.341 real 0m13.363s 00:08:13.341 user 0m52.624s 00:08:13.341 sys 0m1.245s 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.341 ************************************ 00:08:13.341 END TEST nvmf_filesystem_in_capsule 00:08:13.341 ************************************ 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.341 rmmod nvme_tcp 00:08:13.341 rmmod nvme_fabrics 00:08:13.341 rmmod nvme_keyring 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.341 11:58:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.284 11:58:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.284 00:08:15.284 real 0m33.680s 00:08:15.284 user 1m41.457s 00:08:15.284 sys 0m7.014s 00:08:15.284 11:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.284 11:58:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.284 ************************************ 00:08:15.284 END TEST nvmf_filesystem 00:08:15.284 ************************************ 00:08:15.284 11:58:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.284 11:58:05 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.284 11:58:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.284 11:58:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.284 11:58:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.544 ************************************ 00:08:15.544 START TEST nvmf_target_discovery 00:08:15.544 ************************************ 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.544 * Looking for test storage... 
00:08:15.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:15.544 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.545 11:58:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.118 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.119 11:58:10 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:22.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:22.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:22.119 Found net devices under 0000:86:00.0: cvl_0_0 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:22.119 Found net devices under 0000:86:00.1: cvl_0_1 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.119 11:58:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:22.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:22.119 00:08:22.119 --- 10.0.0.2 ping statistics --- 00:08:22.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.119 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:08:22.119 00:08:22.119 --- 10.0.0.1 ping statistics --- 00:08:22.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.119 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=980749 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 980749 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 980749 ']' 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:22.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.119 [2024-07-15 11:58:11.268561] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:08:22.119 [2024-07-15 11:58:11.268607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.119 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.119 [2024-07-15 11:58:11.327867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.119 [2024-07-15 11:58:11.369985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.119 [2024-07-15 11:58:11.370023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.119 [2024-07-15 11:58:11.370032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.119 [2024-07-15 11:58:11.370038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.119 [2024-07-15 11:58:11.370044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.119 [2024-07-15 11:58:11.370111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.119 [2024-07-15 11:58:11.370256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.119 [2024-07-15 11:58:11.370277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.119 [2024-07-15 11:58:11.370278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.119 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 [2024-07-15 11:58:11.531401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 Null1 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 [2024-07-15 11:58:11.576693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 Null2 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 11:58:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 Null3 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 Null4 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:08:22.120 00:08:22.120 Discovery Log Number of Records 6, Generation counter 6 00:08:22.120 =====Discovery Log Entry 0====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: current discovery subsystem 00:08:22.120 treq: not required 00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4420 00:08:22.120 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: explicit discovery connections, duplicate discovery information 00:08:22.120 sectype: none 00:08:22.120 =====Discovery Log Entry 1====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: nvme subsystem 00:08:22.120 treq: not required 00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4420 00:08:22.120 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: none 00:08:22.120 sectype: none 00:08:22.120 =====Discovery Log Entry 2====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: nvme subsystem 00:08:22.120 treq: not required 00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4420 00:08:22.120 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: none 00:08:22.120 sectype: none 00:08:22.120 =====Discovery Log Entry 3====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: nvme subsystem 00:08:22.120 treq: not required 00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4420 00:08:22.120 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: none 00:08:22.120 sectype: none 00:08:22.120 =====Discovery Log Entry 4====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: nvme subsystem 00:08:22.120 treq: not required 
00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4420 00:08:22.120 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: none 00:08:22.120 sectype: none 00:08:22.120 =====Discovery Log Entry 5====== 00:08:22.120 trtype: tcp 00:08:22.120 adrfam: ipv4 00:08:22.120 subtype: discovery subsystem referral 00:08:22.120 treq: not required 00:08:22.120 portid: 0 00:08:22.120 trsvcid: 4430 00:08:22.120 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:22.120 traddr: 10.0.0.2 00:08:22.120 eflags: none 00:08:22.120 sectype: none 00:08:22.120 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:22.120 Perform nvmf subsystem discovery via RPC 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 [ 00:08:22.121 { 00:08:22.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:22.121 "subtype": "Discovery", 00:08:22.121 "listen_addresses": [ 00:08:22.121 { 00:08:22.121 "trtype": "TCP", 00:08:22.121 "adrfam": "IPv4", 00:08:22.121 "traddr": "10.0.0.2", 00:08:22.121 "trsvcid": "4420" 00:08:22.121 } 00:08:22.121 ], 00:08:22.121 "allow_any_host": true, 00:08:22.121 "hosts": [] 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.121 "subtype": "NVMe", 00:08:22.121 "listen_addresses": [ 00:08:22.121 { 00:08:22.121 "trtype": "TCP", 00:08:22.121 "adrfam": "IPv4", 00:08:22.121 "traddr": "10.0.0.2", 00:08:22.121 "trsvcid": "4420" 00:08:22.121 } 00:08:22.121 ], 00:08:22.121 "allow_any_host": true, 00:08:22.121 "hosts": [], 00:08:22.121 "serial_number": "SPDK00000000000001", 00:08:22.121 "model_number": "SPDK bdev Controller", 00:08:22.121 "max_namespaces": 32, 00:08:22.121 "min_cntlid": 1, 00:08:22.121 "max_cntlid": 65519, 00:08:22.121 "namespaces": [ 00:08:22.121 { 00:08:22.121 "nsid": 1, 00:08:22.121 "bdev_name": "Null1", 00:08:22.121 "name": "Null1", 00:08:22.121 "nguid": "3F4D46B27CB3408AB2E6FDB3E5036A08", 00:08:22.121 "uuid": "3f4d46b2-7cb3-408a-b2e6-fdb3e5036a08" 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:22.121 "subtype": "NVMe", 00:08:22.121 "listen_addresses": [ 00:08:22.121 { 00:08:22.121 "trtype": "TCP", 00:08:22.121 "adrfam": "IPv4", 00:08:22.121 "traddr": "10.0.0.2", 00:08:22.121 "trsvcid": "4420" 00:08:22.121 } 00:08:22.121 ], 00:08:22.121 "allow_any_host": true, 00:08:22.121 "hosts": [], 00:08:22.121 "serial_number": "SPDK00000000000002", 00:08:22.121 "model_number": "SPDK bdev Controller", 00:08:22.121 "max_namespaces": 32, 00:08:22.121 "min_cntlid": 1, 00:08:22.121 "max_cntlid": 65519, 00:08:22.121 "namespaces": [ 00:08:22.121 { 00:08:22.121 "nsid": 1, 00:08:22.121 "bdev_name": "Null2", 00:08:22.121 "name": "Null2", 00:08:22.121 "nguid": "937F1EBDF4ED4C449A109D3D19A031C2", 00:08:22.121 "uuid": "937f1ebd-f4ed-4c44-9a10-9d3d19a031c2" 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:22.121 "subtype": "NVMe", 00:08:22.121 "listen_addresses": [ 00:08:22.121 { 00:08:22.121 "trtype": "TCP", 00:08:22.121 "adrfam": "IPv4", 00:08:22.121 "traddr": "10.0.0.2", 00:08:22.121 "trsvcid": "4420" 00:08:22.121 } 00:08:22.121 ], 00:08:22.121 "allow_any_host": true, 
00:08:22.121 "hosts": [], 00:08:22.121 "serial_number": "SPDK00000000000003", 00:08:22.121 "model_number": "SPDK bdev Controller", 00:08:22.121 "max_namespaces": 32, 00:08:22.121 "min_cntlid": 1, 00:08:22.121 "max_cntlid": 65519, 00:08:22.121 "namespaces": [ 00:08:22.121 { 00:08:22.121 "nsid": 1, 00:08:22.121 "bdev_name": "Null3", 00:08:22.121 "name": "Null3", 00:08:22.121 "nguid": "EB0D7B31C9464A719DD7EAFDB8AF8BB8", 00:08:22.121 "uuid": "eb0d7b31-c946-4a71-9dd7-eafdb8af8bb8" 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 }, 00:08:22.121 { 00:08:22.121 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:22.121 "subtype": "NVMe", 00:08:22.121 "listen_addresses": [ 00:08:22.121 { 00:08:22.121 "trtype": "TCP", 00:08:22.121 "adrfam": "IPv4", 00:08:22.121 "traddr": "10.0.0.2", 00:08:22.121 "trsvcid": "4420" 00:08:22.121 } 00:08:22.121 ], 00:08:22.121 "allow_any_host": true, 00:08:22.121 "hosts": [], 00:08:22.121 "serial_number": "SPDK00000000000004", 00:08:22.121 "model_number": "SPDK bdev Controller", 00:08:22.121 "max_namespaces": 32, 00:08:22.121 "min_cntlid": 1, 00:08:22.121 "max_cntlid": 65519, 00:08:22.121 "namespaces": [ 00:08:22.121 { 00:08:22.121 "nsid": 1, 00:08:22.121 "bdev_name": "Null4", 00:08:22.121 "name": "Null4", 00:08:22.121 "nguid": "B594EBB6F9F6421E912C1CB029F99DFA", 00:08:22.121 "uuid": "b594ebb6-f9f6-421e-912c-1cb029f99dfa" 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 } 00:08:22.121 ] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.121 rmmod nvme_tcp 00:08:22.121 rmmod nvme_fabrics 00:08:22.121 rmmod nvme_keyring 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 980749 ']' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 980749 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 980749 ']' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 980749 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:22.121 11:58:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980749 00:08:22.121 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:22.121 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:22.121 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980749' 00:08:22.121 killing process with pid 980749 00:08:22.122 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 980749 00:08:22.122 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 980749 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.381 11:58:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.287 11:58:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.287 00:08:24.287 real 0m8.981s 00:08:24.287 user 0m4.948s 00:08:24.287 sys 0m4.724s 00:08:24.287 11:58:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.287 11:58:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.287 ************************************ 00:08:24.287 END TEST nvmf_target_discovery 00:08:24.287 ************************************ 00:08:24.545 11:58:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:24.545 11:58:14 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.545 11:58:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.545 11:58:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.545 11:58:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.545 ************************************ 00:08:24.546 START TEST nvmf_referrals 00:08:24.546 ************************************ 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:24.546 * Looking for test storage... 00:08:24.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:24.546 11:58:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.121 11:58:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.121 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:31.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:31.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.122 11:58:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:31.122 Found net devices under 0000:86:00.0: cvl_0_0 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:31.122 Found net devices under 0000:86:00.1: cvl_0_1 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.122 11:58:20 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:31.122 00:08:31.122 --- 10.0.0.2 ping statistics --- 00:08:31.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.122 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:08:31.122 00:08:31.122 --- 10.0.0.1 ping statistics --- 00:08:31.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.122 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=984423 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 984423 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 984423 ']' 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:31.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.122 [2024-07-15 11:58:20.340979] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:08:31.122 [2024-07-15 11:58:20.341020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.122 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.122 [2024-07-15 11:58:20.410511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.122 [2024-07-15 11:58:20.452673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.122 [2024-07-15 11:58:20.452713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.122 [2024-07-15 11:58:20.452720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.122 [2024-07-15 11:58:20.452726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.122 [2024-07-15 11:58:20.452732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.122 [2024-07-15 11:58:20.452780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.122 [2024-07-15 11:58:20.452812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.122 [2024-07-15 11:58:20.452924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.122 [2024-07-15 11:58:20.452925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.122 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.122 [2024-07-15 11:58:20.588333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 [2024-07-15 11:58:20.601741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.123 11:58:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.123 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:31.123 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:31.123 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:31.123 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.123 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.383 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:31.641 11:58:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:31.641 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:31.641 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:31.641 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:31.641 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.641 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:31.899 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:31.900 11:58:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.900 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:32.158 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:32.158 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:32.158 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:32.158 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:32.159 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.159 11:58:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:32.159 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:32.417 
11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:32.417 rmmod nvme_tcp 00:08:32.417 rmmod nvme_fabrics 00:08:32.417 rmmod nvme_keyring 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 984423 ']' 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 984423 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 984423 ']' 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 984423 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 984423 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 984423' 00:08:32.417 killing process with pid 984423 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 984423 00:08:32.417 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 984423 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.676 11:58:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.579 11:58:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.579 00:08:34.579 real 0m10.193s 00:08:34.579 user 0m10.317s 00:08:34.579 sys 0m5.055s 00:08:34.579 11:58:24 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.579 11:58:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.579 ************************************ 00:08:34.579 END TEST nvmf_referrals 00:08:34.579 ************************************ 00:08:34.579 11:58:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:34.579 11:58:24 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.579 11:58:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.579 11:58:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.579 11:58:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.839 ************************************ 00:08:34.839 START TEST nvmf_connect_disconnect 00:08:34.839 ************************************ 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:34.839 * Looking for test storage... 00:08:34.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.839 11:58:24 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.839 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.840 11:58:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:41.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:41.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.415 11:58:30 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.415 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:41.416 Found net devices under 0000:86:00.0: cvl_0_0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:41.416 Found net devices under 0000:86:00.1: cvl_0_1 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:41.416 00:08:41.416 --- 10.0.0.2 ping statistics --- 00:08:41.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.416 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:08:41.416 00:08:41.416 --- 10.0.0.1 ping statistics --- 00:08:41.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.416 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=988275 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 988275 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 988275 ']' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 [2024-07-15 11:58:30.632358] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
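The nvmftestinit phase traced above builds the NVMe/TCP test topology out of the two ice ports: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2), the other (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and reachability is verified with ping in both directions. A minimal standalone sketch of that setup follows, with the interface names and addresses copied from this run (they are machine-specific, not fixed constants):

# hedged sketch of the nvmf_tcp_init steps traced above
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener port
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator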
00:08:41.416 [2024-07-15 11:58:30.632405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.416 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.416 [2024-07-15 11:58:30.701643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.416 [2024-07-15 11:58:30.743520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.416 [2024-07-15 11:58:30.743560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.416 [2024-07-15 11:58:30.743567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.416 [2024-07-15 11:58:30.743573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.416 [2024-07-15 11:58:30.743577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.416 [2024-07-15 11:58:30.743619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.416 [2024-07-15 11:58:30.743657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.416 [2024-07-15 11:58:30.743766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.416 [2024-07-15 11:58:30.743767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 [2024-07-15 11:58:30.884279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.416 11:58:30 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.416 [2024-07-15 11:58:30.935885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:41.416 11:58:30 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:43.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.773 [2024-07-15 11:59:19.383114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddb3d0 is same with the state(5) to be set 00:09:29.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
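Before the connect/disconnect iterations above and below, connect_disconnect.sh brings up the target through the trace's rpc_cmd wrapper: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420; each of the 100 iterations then runs 'nvme connect -i 8' followed by a disconnect, which prints the "disconnected 1 controller(s)" lines. A hedged equivalent using SPDK's plain rpc.py (the test itself issues these via rpc_cmd inside the cvl_0_0_ns_spdk namespace):

# hedged sketch of the target bring-up and one iteration of the loop
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                                    # 64 MiB bdev, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# one of the 100 connect/disconnect iterations
nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                       # prints "... disconnected 1 controller(s)"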
00:09:31.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.682 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:24.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.477 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.897 rmmod nvme_tcp 00:12:30.897 rmmod nvme_fabrics 00:12:30.897 rmmod nvme_keyring 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 988275 ']' 00:12:30.897 12:02:20 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 988275 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 988275 ']' 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 988275 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 988275 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 988275' 00:12:30.897 killing process with pid 988275 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 988275 00:12:30.897 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 988275 00:12:31.156 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.156 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.156 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.157 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.157 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.157 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.157 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.157 12:02:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.061 12:02:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.321 00:12:33.321 real 3m58.454s 00:12:33.321 user 15m14.187s 00:12:33.321 sys 0m20.159s 00:12:33.321 12:02:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.321 12:02:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.321 ************************************ 00:12:33.321 END TEST nvmf_connect_disconnect 00:12:33.321 ************************************ 00:12:33.321 12:02:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:33.321 12:02:23 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.321 12:02:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:33.321 12:02:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.321 12:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:33.321 ************************************ 00:12:33.321 START TEST nvmf_multitarget 00:12:33.321 ************************************ 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:33.321 * Looking 
for test storage... 00:12:33.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.321 12:02:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.322 12:02:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:39.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:39.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:39.896 Found net devices under 0000:86:00.0: cvl_0_0 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:39.896 Found net devices under 0000:86:00.1: cvl_0_1 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.896 12:02:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.896 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.896 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.896 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:12:39.896 00:12:39.896 --- 10.0.0.2 ping statistics --- 00:12:39.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.896 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:39.896 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:12:39.896 00:12:39.896 --- 10.0.0.1 ping statistics --- 00:12:39.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.897 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1031451 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1031451 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1031451 ']' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 [2024-07-15 12:02:29.137259] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:12:39.897 [2024-07-15 12:02:29.137307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.897 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.897 [2024-07-15 12:02:29.210663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.897 [2024-07-15 12:02:29.253515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.897 [2024-07-15 12:02:29.253557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.897 [2024-07-15 12:02:29.253564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.897 [2024-07-15 12:02:29.253571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.897 [2024-07-15 12:02:29.253577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.897 [2024-07-15 12:02:29.253640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.897 [2024-07-15 12:02:29.253678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.897 [2024-07-15 12:02:29.253789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.897 [2024-07-15 12:02:29.253790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:39.897 "nvmf_tgt_1" 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:39.897 "nvmf_tgt_2" 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:39.897 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:39.897 true 00:12:40.156 12:02:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:40.156 true 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.156 rmmod nvme_tcp 00:12:40.156 rmmod nvme_fabrics 00:12:40.156 rmmod nvme_keyring 00:12:40.156 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1031451 ']' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1031451 ']' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031451' 00:12:40.414 killing process with pid 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1031451 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.414 12:02:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.985 12:02:32 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:42.985 00:12:42.985 real 0m9.325s 00:12:42.985 user 0m6.696s 00:12:42.985 sys 0m4.846s 00:12:42.985 12:02:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.985 12:02:32 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:42.985 ************************************ 00:12:42.985 END TEST nvmf_multitarget 00:12:42.985 ************************************ 00:12:42.985 12:02:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:42.985 12:02:32 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.985 12:02:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.985 12:02:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.985 12:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.985 ************************************ 00:12:42.985 START TEST nvmf_rpc 00:12:42.985 ************************************ 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:42.985 * Looking for test storage... 
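Stripped of the harness plumbing, the nvmf_multitarget run that ends above reduces to the short RPC sequence below. This is a condensed sketch reconstructed from the trace, not the test script itself; the multitarget_rpc.py path and the nvmf_tgt_1/nvmf_tgt_2 names are simply the values used in this run.
  # assumes an SPDK nvmf_tgt is already running and reachable on its default RPC socket
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length             # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length             # 3: the default target plus the two just created
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length             # back to 1 before nvmftestfini tears everything down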
00:12:42.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.985 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:42.986 12:02:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
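Before the raw trace of interface discovery and namespace plumbing below, here is what nvmftestinit / nvmf_tcp_init boils down to on this machine: one port of the NIC is moved into a private network namespace and plays the target, the other stays in the root namespace as the initiator. A minimal sketch, assuming the two ports show up as cvl_0_0 and cvl_0_1 as in this run (device names and the 10.0.0.0/24 addresses are whatever the harness picks; root is required):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator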
00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:48.262 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:48.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:48.262 Found net devices under 0000:86:00.0: cvl_0_0 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:48.262 Found net devices under 0000:86:00.1: cvl_0_1 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.262 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:12:48.522 00:12:48.522 --- 10.0.0.2 ping statistics --- 00:12:48.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.522 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:48.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:12:48.522 00:12:48.522 --- 10.0.0.1 ping statistics --- 00:12:48.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.522 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1035164 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1035164 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1035164 ']' 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.522 12:02:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.522 [2024-07-15 12:02:38.493909] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:12:48.522 [2024-07-15 12:02:38.493957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.522 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.781 [2024-07-15 12:02:38.565570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.781 [2024-07-15 12:02:38.607580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.781 [2024-07-15 12:02:38.607620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:48.781 [2024-07-15 12:02:38.607627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.781 [2024-07-15 12:02:38.607633] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.781 [2024-07-15 12:02:38.607639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.781 [2024-07-15 12:02:38.607683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.781 [2024-07-15 12:02:38.607792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.781 [2024-07-15 12:02:38.607899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.781 [2024-07-15 12:02:38.607901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:49.348 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.349 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:49.608 "tick_rate": 2300000000, 00:12:49.608 "poll_groups": [ 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_000", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_001", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_002", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_003", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [] 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 }' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 [2024-07-15 12:02:39.458762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:49.608 "tick_rate": 2300000000, 00:12:49.608 "poll_groups": [ 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_000", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [ 00:12:49.608 { 00:12:49.608 "trtype": "TCP" 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_001", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [ 00:12:49.608 { 00:12:49.608 "trtype": "TCP" 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_002", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [ 00:12:49.608 { 00:12:49.608 "trtype": "TCP" 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 }, 00:12:49.608 { 00:12:49.608 "name": "nvmf_tgt_poll_group_003", 00:12:49.608 "admin_qpairs": 0, 00:12:49.608 "io_qpairs": 0, 00:12:49.608 "current_admin_qpairs": 0, 00:12:49.608 "current_io_qpairs": 0, 00:12:49.608 "pending_bdev_io": 0, 00:12:49.608 "completed_nvme_io": 0, 00:12:49.608 "transports": [ 00:12:49.608 { 00:12:49.608 "trtype": "TCP" 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 } 00:12:49.608 ] 00:12:49.608 }' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
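The jcount/jsum helpers traced in this block are plain jq pipelines over the nvmf_get_stats output, and rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py. A minimal equivalent of the checks performed here, as a sketch assuming the target's default RPC socket:
  STATS=$(scripts/rpc.py nvmf_get_stats)
  echo "$STATS" | jq '.poll_groups[].name' | wc -l                     # one poll group per core, 4 in this run
  echo "$STATS" | jq '.poll_groups[0].transports[0]'                   # null until a transport is created
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192               # after this, each poll group lists TCP
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 0, no hosts yet
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # likewise 0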
00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 Malloc1 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.608 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.867 [2024-07-15 12:02:39.626836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:49.867 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:49.868 [2024-07-15 12:02:39.655386] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:49.868 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.868 could not add new controller: failed to write to nvme-fabrics device 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.868 12:02:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.803 12:02:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.803 12:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.804 12:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.804 12:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.804 12:02:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.336 12:02:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:53.336 12:02:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.336 [2024-07-15 12:02:43.048756] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:53.336 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.336 could not add new controller: failed to write to nvme-fabrics device 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.336 12:02:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.270 12:02:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.270 12:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.270 12:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.270 12:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:54.270 12:02:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.823 12:02:46 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.823 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.824 [2024-07-15 12:02:46.386010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.824 12:02:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.758 12:02:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.758 12:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:57.758 12:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.758 12:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:57.758 12:02:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.661 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 [2024-07-15 12:02:49.664732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.920 12:02:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.856 12:02:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.856 12:02:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:13:00.856 12:02:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.856 12:02:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.856 12:02:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:02.822 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 [2024-07-15 12:02:52.946973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.080 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.081 12:02:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.081 12:02:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.453 12:02:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.453 12:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.453 12:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.453 12:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.453 12:02:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 [2024-07-15 12:02:56.245951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.358 12:02:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.736 12:02:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.736 12:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.736 12:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.736 12:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.736 12:02:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.641 
12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 [2024-07-15 12:02:59.562729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.641 12:02:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.641 12:02:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.020 12:03:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.020 12:03:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.020 12:03:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.020 12:03:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:11.020 12:03:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 [2024-07-15 12:03:02.860908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 [2024-07-15 12:03:02.913031] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.927 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 [2024-07-15 12:03:02.965217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
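The connect/disconnect cycles traced above all rest on one polling idiom: after nvme connect, the test loops on lsblk until a block device carrying the subsystem serial shows up. A minimal sketch of that waitforserial step, using only names and values visible in the trace (up to 15 retries, 2-second sleeps, serial SPDKISFASTANDAWESOME), assuming it runs as root on the initiator:

wait_for_serial() {                                   # mirrors the waitforserial helper seen in the trace
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1                                          # device never appeared
}

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
wait_for_serial SPDKISFASTANDAWESOME
nvme disconnect -n nqn.2016-06.io.spdk:cnode1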
00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 [2024-07-15 12:03:03.013369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
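The iterations being traced here exercise the bare subsystem/namespace lifecycle with no host attached. Condensed into a single pass of plain rpc.py calls, using only RPC names and arguments that appear in the trace (the rpc.py path is the workspace one used throughout this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME     # subsystem with the test serial
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns $nqn Malloc1                     # namespace backed by the Malloc1 bdev
$rpc nvmf_subsystem_allow_any_host $nqn
$rpc nvmf_subsystem_remove_ns $nqn 1                        # nsid 1 removed again...
$rpc nvmf_delete_subsystem $nqn                             # ...and the subsystem torn down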
00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.187 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 [2024-07-15 12:03:03.061523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:13.188 "tick_rate": 2300000000, 00:13:13.188 "poll_groups": [ 00:13:13.188 { 00:13:13.188 "name": "nvmf_tgt_poll_group_000", 00:13:13.188 "admin_qpairs": 2, 00:13:13.188 "io_qpairs": 168, 00:13:13.188 "current_admin_qpairs": 0, 00:13:13.188 "current_io_qpairs": 0, 00:13:13.188 "pending_bdev_io": 0, 00:13:13.188 "completed_nvme_io": 219, 00:13:13.188 "transports": [ 00:13:13.188 { 00:13:13.188 "trtype": "TCP" 00:13:13.188 } 00:13:13.188 ] 00:13:13.188 }, 00:13:13.188 { 00:13:13.188 "name": "nvmf_tgt_poll_group_001", 00:13:13.188 "admin_qpairs": 2, 00:13:13.188 "io_qpairs": 168, 00:13:13.188 "current_admin_qpairs": 0, 00:13:13.188 "current_io_qpairs": 0, 00:13:13.188 "pending_bdev_io": 0, 00:13:13.188 "completed_nvme_io": 317, 00:13:13.188 "transports": [ 00:13:13.188 { 00:13:13.188 "trtype": "TCP" 00:13:13.188 } 00:13:13.188 ] 00:13:13.188 }, 00:13:13.188 { 
00:13:13.188 "name": "nvmf_tgt_poll_group_002", 00:13:13.188 "admin_qpairs": 1, 00:13:13.188 "io_qpairs": 168, 00:13:13.188 "current_admin_qpairs": 0, 00:13:13.188 "current_io_qpairs": 0, 00:13:13.188 "pending_bdev_io": 0, 00:13:13.188 "completed_nvme_io": 219, 00:13:13.188 "transports": [ 00:13:13.188 { 00:13:13.188 "trtype": "TCP" 00:13:13.188 } 00:13:13.188 ] 00:13:13.188 }, 00:13:13.188 { 00:13:13.188 "name": "nvmf_tgt_poll_group_003", 00:13:13.188 "admin_qpairs": 2, 00:13:13.188 "io_qpairs": 168, 00:13:13.188 "current_admin_qpairs": 0, 00:13:13.188 "current_io_qpairs": 0, 00:13:13.188 "pending_bdev_io": 0, 00:13:13.188 "completed_nvme_io": 267, 00:13:13.188 "transports": [ 00:13:13.188 { 00:13:13.188 "trtype": "TCP" 00:13:13.188 } 00:13:13.188 ] 00:13:13.188 } 00:13:13.188 ] 00:13:13.188 }' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.188 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.447 rmmod nvme_tcp 00:13:13.447 rmmod nvme_fabrics 00:13:13.447 rmmod nvme_keyring 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1035164 ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1035164 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1035164 ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1035164 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035164 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035164' 00:13:13.447 killing process with pid 1035164 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1035164 00:13:13.447 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1035164 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.706 12:03:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.612 12:03:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.612 00:13:15.612 real 0m33.053s 00:13:15.612 user 1m41.066s 00:13:15.612 sys 0m6.102s 00:13:15.612 12:03:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.612 12:03:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.612 ************************************ 00:13:15.612 END TEST nvmf_rpc 00:13:15.612 ************************************ 00:13:15.873 12:03:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.873 12:03:05 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.873 12:03:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.873 12:03:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.873 12:03:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.873 ************************************ 00:13:15.873 START TEST nvmf_invalid 00:13:15.873 ************************************ 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.873 * Looking for test storage... 
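The queue-pair totals asserted at the end of the nvmf_rpc run above ((( 7 > 0 )) admin qpairs, (( 672 > 0 )) io qpairs) come from the jsum helper, which is simply a jq projection piped into an awk sum over nvmf_get_stats output; a standalone sketch with the same filter strings seen in the trace:

stats=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # summed admin qpairs
echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # summed io qpairs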
00:13:15.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.873 12:03:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.444 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.444 Found net devices under 0000:86:00.1: cvl_0_1 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:22.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:13:22.444 00:13:22.444 --- 10.0.0.2 ping statistics --- 00:13:22.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.444 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:13:22.444 00:13:22.444 --- 10.0.0.1 ping statistics --- 00:13:22.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.444 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1043457 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1043457 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1043457 ']' 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.444 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.445 12:03:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.445 [2024-07-15 12:03:11.624269] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
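Before the invalid-input cases run, the common helpers give the target its own network namespace so that 10.0.0.1 (initiator side, cvl_0_1) and 10.0.0.2 (target side, cvl_0_0) can reach each other over the same NIC. The setup consolidated from the trace above, assuming root and the same interface names:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator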
00:13:22.445 [2024-07-15 12:03:11.624311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.445 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.445 [2024-07-15 12:03:11.697279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.445 [2024-07-15 12:03:11.738947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.445 [2024-07-15 12:03:11.738985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.445 [2024-07-15 12:03:11.738992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.445 [2024-07-15 12:03:11.739002] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.445 [2024-07-15 12:03:11.739007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.445 [2024-07-15 12:03:11.739049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.445 [2024-07-15 12:03:11.739161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.445 [2024-07-15 12:03:11.739269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.445 [2024-07-15 12:03:11.739270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.445 12:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:22.445 12:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:22.445 12:03:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.445 12:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:22.445 12:03:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7619 00:13:22.703 [2024-07-15 12:03:12.610730] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:22.703 { 00:13:22.703 "nqn": "nqn.2016-06.io.spdk:cnode7619", 00:13:22.703 "tgt_name": "foobar", 00:13:22.703 "method": "nvmf_create_subsystem", 00:13:22.703 "req_id": 1 00:13:22.703 } 00:13:22.703 Got JSON-RPC error response 00:13:22.703 response: 00:13:22.703 { 00:13:22.703 "code": -32603, 00:13:22.703 "message": "Unable to find target foobar" 00:13:22.703 }' 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:22.703 { 00:13:22.703 "nqn": "nqn.2016-06.io.spdk:cnode7619", 00:13:22.703 "tgt_name": "foobar", 00:13:22.703 "method": "nvmf_create_subsystem", 00:13:22.703 "req_id": 1 00:13:22.703 } 00:13:22.703 Got JSON-RPC error response 00:13:22.703 response: 00:13:22.703 { 00:13:22.703 "code": -32603, 00:13:22.703 "message": "Unable to find target foobar" 00:13:22.703 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:22.703 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8581 00:13:22.961 [2024-07-15 12:03:12.807461] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8581: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:22.961 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:22.961 { 00:13:22.961 "nqn": "nqn.2016-06.io.spdk:cnode8581", 00:13:22.961 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.961 "method": "nvmf_create_subsystem", 00:13:22.961 "req_id": 1 00:13:22.961 } 00:13:22.961 Got JSON-RPC error response 00:13:22.961 response: 00:13:22.961 { 00:13:22.961 "code": -32602, 00:13:22.961 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.961 }' 00:13:22.961 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:22.961 { 00:13:22.961 "nqn": "nqn.2016-06.io.spdk:cnode8581", 00:13:22.961 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:22.961 "method": "nvmf_create_subsystem", 00:13:22.961 "req_id": 1 00:13:22.961 } 00:13:22.961 Got JSON-RPC error response 00:13:22.961 response: 00:13:22.961 { 00:13:22.961 "code": -32602, 00:13:22.961 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:22.961 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:22.961 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:22.961 12:03:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16664 00:13:23.221 [2024-07-15 12:03:13.004065] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16664: invalid model number 'SPDK_Controller' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:23.221 { 00:13:23.221 "nqn": "nqn.2016-06.io.spdk:cnode16664", 00:13:23.221 "model_number": "SPDK_Controller\u001f", 00:13:23.221 "method": "nvmf_create_subsystem", 00:13:23.221 "req_id": 1 00:13:23.221 } 00:13:23.221 Got JSON-RPC error response 00:13:23.221 response: 00:13:23.221 { 00:13:23.221 "code": -32602, 00:13:23.221 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.221 }' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:23.221 { 00:13:23.221 "nqn": "nqn.2016-06.io.spdk:cnode16664", 00:13:23.221 "model_number": "SPDK_Controller\u001f", 00:13:23.221 "method": "nvmf_create_subsystem", 00:13:23.221 "req_id": 1 00:13:23.221 } 00:13:23.221 Got JSON-RPC error response 00:13:23.221 response: 00:13:23.221 { 00:13:23.221 "code": -32602, 00:13:23.221 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.221 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 
12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:23.221 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h.;NB"H/$]]G|gJ)UX;R;' 00:13:23.222 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'h.;NB"H/$]]G|gJ)UX;R;' nqn.2016-06.io.spdk:cnode26384 00:13:23.482 [2024-07-15 12:03:13.325132] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26384: invalid serial number 'h.;NB"H/$]]G|gJ)UX;R;' 00:13:23.482 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:23.482 { 00:13:23.482 "nqn": "nqn.2016-06.io.spdk:cnode26384", 00:13:23.483 "serial_number": "h.;NB\"H/$]]G|gJ)UX;R;", 00:13:23.483 "method": "nvmf_create_subsystem", 00:13:23.483 "req_id": 1 00:13:23.483 } 00:13:23.483 Got JSON-RPC error response 00:13:23.483 response: 00:13:23.483 { 
00:13:23.483 "code": -32602, 00:13:23.483 "message": "Invalid SN h.;NB\"H/$]]G|gJ)UX;R;" 00:13:23.483 }' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:23.483 { 00:13:23.483 "nqn": "nqn.2016-06.io.spdk:cnode26384", 00:13:23.483 "serial_number": "h.;NB\"H/$]]G|gJ)UX;R;", 00:13:23.483 "method": "nvmf_create_subsystem", 00:13:23.483 "req_id": 1 00:13:23.483 } 00:13:23.483 Got JSON-RPC error response 00:13:23.483 response: 00:13:23.483 { 00:13:23.483 "code": -32602, 00:13:23.483 "message": "Invalid SN h.;NB\"H/$]]G|gJ)UX;R;" 00:13:23.483 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 
00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.483 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:23.484 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 61 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '5%\R[m_Zeex.W'\'']Q:Dx_(5U<{_`{fnR{[9pd=W2o5' 00:13:23.778 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '5%\R[m_Zeex.W'\'']Q:Dx_(5U<{_`{fnR{[9pd=W2o5' nqn.2016-06.io.spdk:cnode20301 00:13:24.038 [2024-07-15 12:03:13.766699] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20301: invalid model number '5%\R[m_Zeex.W']Q:Dx_(5U<{_`{fnR{[9pd=W2o5' 00:13:24.038 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:24.038 { 00:13:24.038 "nqn": "nqn.2016-06.io.spdk:cnode20301", 00:13:24.038 "model_number": "5%\\R[m_Zeex.W'\'']Q:Dx_(5U<{_`{fnR{[9pd=W2o5", 00:13:24.038 "method": "nvmf_create_subsystem", 00:13:24.038 "req_id": 1 00:13:24.038 } 00:13:24.038 Got JSON-RPC error response 00:13:24.038 response: 00:13:24.038 { 00:13:24.038 "code": -32602, 00:13:24.038 "message": "Invalid MN 5%\\R[m_Zeex.W'\'']Q:Dx_(5U<{_`{fnR{[9pd=W2o5" 00:13:24.038 }' 00:13:24.038 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:24.038 { 00:13:24.038 "nqn": "nqn.2016-06.io.spdk:cnode20301", 00:13:24.038 "model_number": "5%\\R[m_Zeex.W']Q:Dx_(5U<{_`{fnR{[9pd=W2o5", 00:13:24.038 "method": "nvmf_create_subsystem", 00:13:24.038 
"req_id": 1 00:13:24.038 } 00:13:24.038 Got JSON-RPC error response 00:13:24.038 response: 00:13:24.038 { 00:13:24.038 "code": -32602, 00:13:24.038 "message": "Invalid MN 5%\\R[m_Zeex.W']Q:Dx_(5U<{_`{fnR{[9pd=W2o5" 00:13:24.038 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:24.038 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:24.038 [2024-07-15 12:03:13.951385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.038 12:03:13 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:24.297 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:24.297 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:24.297 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:24.297 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:24.297 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:24.555 [2024-07-15 12:03:14.344728] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:24.555 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:24.555 { 00:13:24.555 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:24.555 "listen_address": { 00:13:24.555 "trtype": "tcp", 00:13:24.555 "traddr": "", 00:13:24.555 "trsvcid": "4421" 00:13:24.555 }, 00:13:24.555 "method": "nvmf_subsystem_remove_listener", 00:13:24.555 "req_id": 1 00:13:24.555 } 00:13:24.555 Got JSON-RPC error response 00:13:24.555 response: 00:13:24.555 { 00:13:24.555 "code": -32602, 00:13:24.555 "message": "Invalid parameters" 00:13:24.555 }' 00:13:24.555 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:24.555 { 00:13:24.555 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:24.555 "listen_address": { 00:13:24.555 "trtype": "tcp", 00:13:24.555 "traddr": "", 00:13:24.555 "trsvcid": "4421" 00:13:24.555 }, 00:13:24.555 "method": "nvmf_subsystem_remove_listener", 00:13:24.555 "req_id": 1 00:13:24.555 } 00:13:24.555 Got JSON-RPC error response 00:13:24.555 response: 00:13:24.555 { 00:13:24.555 "code": -32602, 00:13:24.555 "message": "Invalid parameters" 00:13:24.555 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:24.555 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9305 -i 0 00:13:24.555 [2024-07-15 12:03:14.529300] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9305: invalid cntlid range [0-65519] 00:13:24.813 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:24.813 { 00:13:24.813 "nqn": "nqn.2016-06.io.spdk:cnode9305", 00:13:24.813 "min_cntlid": 0, 00:13:24.813 "method": "nvmf_create_subsystem", 00:13:24.813 "req_id": 1 00:13:24.813 } 00:13:24.813 Got JSON-RPC error response 00:13:24.813 response: 00:13:24.813 { 00:13:24.813 "code": -32602, 00:13:24.813 "message": "Invalid cntlid range [0-65519]" 00:13:24.813 }' 00:13:24.813 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:24.813 { 00:13:24.813 "nqn": "nqn.2016-06.io.spdk:cnode9305", 
00:13:24.813 "min_cntlid": 0, 00:13:24.813 "method": "nvmf_create_subsystem", 00:13:24.813 "req_id": 1 00:13:24.813 } 00:13:24.813 Got JSON-RPC error response 00:13:24.813 response: 00:13:24.813 { 00:13:24.813 "code": -32602, 00:13:24.813 "message": "Invalid cntlid range [0-65519]" 00:13:24.813 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:24.813 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26788 -i 65520 00:13:24.813 [2024-07-15 12:03:14.713919] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26788: invalid cntlid range [65520-65519] 00:13:24.813 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:24.813 { 00:13:24.813 "nqn": "nqn.2016-06.io.spdk:cnode26788", 00:13:24.813 "min_cntlid": 65520, 00:13:24.813 "method": "nvmf_create_subsystem", 00:13:24.813 "req_id": 1 00:13:24.813 } 00:13:24.813 Got JSON-RPC error response 00:13:24.813 response: 00:13:24.813 { 00:13:24.813 "code": -32602, 00:13:24.813 "message": "Invalid cntlid range [65520-65519]" 00:13:24.813 }' 00:13:24.813 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:24.813 { 00:13:24.813 "nqn": "nqn.2016-06.io.spdk:cnode26788", 00:13:24.813 "min_cntlid": 65520, 00:13:24.813 "method": "nvmf_create_subsystem", 00:13:24.813 "req_id": 1 00:13:24.813 } 00:13:24.814 Got JSON-RPC error response 00:13:24.814 response: 00:13:24.814 { 00:13:24.814 "code": -32602, 00:13:24.814 "message": "Invalid cntlid range [65520-65519]" 00:13:24.814 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:24.814 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4139 -I 0 00:13:25.072 [2024-07-15 12:03:14.902595] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4139: invalid cntlid range [1-0] 00:13:25.072 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:25.072 { 00:13:25.072 "nqn": "nqn.2016-06.io.spdk:cnode4139", 00:13:25.072 "max_cntlid": 0, 00:13:25.072 "method": "nvmf_create_subsystem", 00:13:25.072 "req_id": 1 00:13:25.072 } 00:13:25.072 Got JSON-RPC error response 00:13:25.072 response: 00:13:25.072 { 00:13:25.072 "code": -32602, 00:13:25.072 "message": "Invalid cntlid range [1-0]" 00:13:25.072 }' 00:13:25.072 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:25.072 { 00:13:25.072 "nqn": "nqn.2016-06.io.spdk:cnode4139", 00:13:25.072 "max_cntlid": 0, 00:13:25.072 "method": "nvmf_create_subsystem", 00:13:25.072 "req_id": 1 00:13:25.072 } 00:13:25.072 Got JSON-RPC error response 00:13:25.072 response: 00:13:25.072 { 00:13:25.072 "code": -32602, 00:13:25.072 "message": "Invalid cntlid range [1-0]" 00:13:25.072 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.072 12:03:14 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30753 -I 65520 00:13:25.331 [2024-07-15 12:03:15.091244] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30753: invalid cntlid range [1-65520] 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:25.331 { 00:13:25.331 "nqn": "nqn.2016-06.io.spdk:cnode30753", 00:13:25.331 "max_cntlid": 65520, 
00:13:25.331 "method": "nvmf_create_subsystem", 00:13:25.331 "req_id": 1 00:13:25.331 } 00:13:25.331 Got JSON-RPC error response 00:13:25.331 response: 00:13:25.331 { 00:13:25.331 "code": -32602, 00:13:25.331 "message": "Invalid cntlid range [1-65520]" 00:13:25.331 }' 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:25.331 { 00:13:25.331 "nqn": "nqn.2016-06.io.spdk:cnode30753", 00:13:25.331 "max_cntlid": 65520, 00:13:25.331 "method": "nvmf_create_subsystem", 00:13:25.331 "req_id": 1 00:13:25.331 } 00:13:25.331 Got JSON-RPC error response 00:13:25.331 response: 00:13:25.331 { 00:13:25.331 "code": -32602, 00:13:25.331 "message": "Invalid cntlid range [1-65520]" 00:13:25.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32317 -i 6 -I 5 00:13:25.331 [2024-07-15 12:03:15.271879] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32317: invalid cntlid range [6-5] 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:25.331 { 00:13:25.331 "nqn": "nqn.2016-06.io.spdk:cnode32317", 00:13:25.331 "min_cntlid": 6, 00:13:25.331 "max_cntlid": 5, 00:13:25.331 "method": "nvmf_create_subsystem", 00:13:25.331 "req_id": 1 00:13:25.331 } 00:13:25.331 Got JSON-RPC error response 00:13:25.331 response: 00:13:25.331 { 00:13:25.331 "code": -32602, 00:13:25.331 "message": "Invalid cntlid range [6-5]" 00:13:25.331 }' 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:25.331 { 00:13:25.331 "nqn": "nqn.2016-06.io.spdk:cnode32317", 00:13:25.331 "min_cntlid": 6, 00:13:25.331 "max_cntlid": 5, 00:13:25.331 "method": "nvmf_create_subsystem", 00:13:25.331 "req_id": 1 00:13:25.331 } 00:13:25.331 Got JSON-RPC error response 00:13:25.331 response: 00:13:25.331 { 00:13:25.331 "code": -32602, 00:13:25.331 "message": "Invalid cntlid range [6-5]" 00:13:25.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.331 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:25.590 { 00:13:25.590 "name": "foobar", 00:13:25.590 "method": "nvmf_delete_target", 00:13:25.590 "req_id": 1 00:13:25.590 } 00:13:25.590 Got JSON-RPC error response 00:13:25.590 response: 00:13:25.590 { 00:13:25.590 "code": -32602, 00:13:25.590 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:25.590 }' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:25.590 { 00:13:25.590 "name": "foobar", 00:13:25.590 "method": "nvmf_delete_target", 00:13:25.590 "req_id": 1 00:13:25.590 } 00:13:25.590 Got JSON-RPC error response 00:13:25.590 response: 00:13:25.590 { 00:13:25.590 "code": -32602, 00:13:25.590 "message": "The specified target doesn't exist, cannot delete it." 
00:13:25.590 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.590 rmmod nvme_tcp 00:13:25.590 rmmod nvme_fabrics 00:13:25.590 rmmod nvme_keyring 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1043457 ']' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1043457 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1043457 ']' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1043457 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1043457 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1043457' 00:13:25.590 killing process with pid 1043457 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1043457 00:13:25.590 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1043457 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.849 12:03:15 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.384 12:03:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.384 00:13:28.384 real 0m12.122s 00:13:28.384 user 0m19.728s 00:13:28.384 sys 0m5.314s 00:13:28.384 12:03:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.384 12:03:17 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.384 ************************************ 00:13:28.384 END TEST nvmf_invalid 00:13:28.384 ************************************ 00:13:28.384 12:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.384 12:03:17 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:28.384 12:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.384 12:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.384 12:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.384 ************************************ 00:13:28.384 START TEST nvmf_abort 00:13:28.384 ************************************ 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:28.384 * Looking for test storage... 00:13:28.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.384 12:03:17 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.384 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.385 12:03:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.683 
12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.683 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.684 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.684 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.684 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:33.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:13:33.943 00:13:33.943 --- 10.0.0.2 ping statistics --- 00:13:33.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.943 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:13:33.943 00:13:33.943 --- 10.0.0.1 ping statistics --- 00:13:33.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.943 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1047661 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1047661 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1047661 ']' 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.943 12:03:23 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:33.943 [2024-07-15 12:03:23.813176] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
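In condensed form, the nvmf_tcp_init sequence traced above amounts to the sketch below (interface names and addresses are the ones printed by the trace; the exact options live in nvmf/common.sh, so treat this as an approximation rather than the script itself):

    # one ice port stays in the default netns as the initiator, the other is
    # moved into a namespace that will host nvmf_tgt as the target side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # connectivity checks echoed in the trace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp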
00:13:33.943 [2024-07-15 12:03:23.813223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.943 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.943 [2024-07-15 12:03:23.885915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:33.943 [2024-07-15 12:03:23.927074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.943 [2024-07-15 12:03:23.927114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.943 [2024-07-15 12:03:23.927121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.943 [2024-07-15 12:03:23.927127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.943 [2024-07-15 12:03:23.927132] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.943 [2024-07-15 12:03:23.927253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.943 [2024-07-15 12:03:23.927360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.943 [2024-07-15 12:03:23.927361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 [2024-07-15 12:03:24.057212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 Malloc0 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 Delay0 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 [2024-07-15 12:03:24.132338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.203 12:03:24 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:34.203 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.462 [2024-07-15 12:03:24.249032] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:36.368 Initializing NVMe Controllers 00:13:36.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:36.368 controller IO queue size 128 less than required 00:13:36.368 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:36.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:36.368 Initialization complete. Launching workers. 
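The abort statistics that follow were produced by the run traced above. Stripped of the xtrace prefixes, the target setup and the run reduce to roughly the sequence below (rpc_cmd is the autotest wrapper around scripts/rpc.py; all values are the ones printed in the trace, and the condensation itself is a sketch):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # drive aborts against the delayed namespace for 1 second at queue depth 128
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev layered on Malloc0 keeps commands outstanding long enough that the abort requests reliably find in-flight I/O to cancel, which is what the NS/CTRLR counters below report.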
00:13:36.368 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43128 00:13:36.368 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43189, failed to submit 62 00:13:36.368 success 43132, unsuccess 57, failed 0 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.368 rmmod nvme_tcp 00:13:36.368 rmmod nvme_fabrics 00:13:36.368 rmmod nvme_keyring 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1047661 ']' 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1047661 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1047661 ']' 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1047661 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.368 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047661 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047661' 00:13:36.627 killing process with pid 1047661 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1047661 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1047661 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.627 12:03:26 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.166 12:03:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:39.166 00:13:39.166 real 0m10.811s 00:13:39.166 user 0m11.119s 00:13:39.166 sys 0m5.230s 00:13:39.166 12:03:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.166 12:03:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:39.166 ************************************ 00:13:39.166 END TEST nvmf_abort 00:13:39.166 ************************************ 00:13:39.166 12:03:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:39.166 12:03:28 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:39.166 12:03:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:39.166 12:03:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.166 12:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:39.166 ************************************ 00:13:39.166 START TEST nvmf_ns_hotplug_stress 00:13:39.166 ************************************ 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:39.166 * Looking for test storage... 00:13:39.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.166 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.167 12:03:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.167 12:03:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:39.167 12:03:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:44.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:44.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.445 12:03:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:44.445 Found net devices under 0000:86:00.0: cvl_0_0 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:44.445 Found net devices under 0000:86:00.1: cvl_0_1 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.445 12:03:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.445 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:13:44.705 00:13:44.705 --- 10.0.0.2 ping statistics --- 00:13:44.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.705 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:13:44.705 00:13:44.705 --- 10.0.0.1 ping statistics --- 00:13:44.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.705 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1051661 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1051661 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1051661 ']' 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.705 12:03:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.705 [2024-07-15 12:03:34.658483] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:13:44.705 [2024-07-15 12:03:34.658527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.965 [2024-07-15 12:03:34.729903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.965 [2024-07-15 12:03:34.769359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.965 [2024-07-15 12:03:34.769399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.965 [2024-07-15 12:03:34.769406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.965 [2024-07-15 12:03:34.769412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.965 [2024-07-15 12:03:34.769417] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.965 [2024-07-15 12:03:34.769527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.965 [2024-07-15 12:03:34.769634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.965 [2024-07-15 12:03:34.769636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:45.533 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.792 [2024-07-15 12:03:35.661626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.792 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.055 12:03:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.055 [2024-07-15 12:03:36.014937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.055 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.351 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:46.610 Malloc0 00:13:46.610 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:46.610 Delay0 00:13:46.610 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.869 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:47.128 NULL1 00:13:47.128 12:03:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:47.388 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1052148 00:13:47.388 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:47.388 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:47.388 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.388 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.388 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.647 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:47.647 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:47.907 true 00:13:47.907 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:47.907 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.907 12:03:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.166 12:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:48.166 12:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:48.426 true 00:13:48.426 12:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:48.426 12:03:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.804 Read completed with error (sct=0, sc=11) 00:13:49.804 12:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.804 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.804 12:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:49.804 12:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:49.804 true 00:13:50.064 12:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:50.064 12:03:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.891 12:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.891 12:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:50.891 12:03:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:51.149 true 00:13:51.149 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:51.149 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.407 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.407 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:51.407 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:51.665 true 00:13:51.665 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:51.665 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.924 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.924 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:51.924 12:03:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:52.182 true 00:13:52.182 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:52.182 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.441 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.699 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:52.699 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:52.699 true 00:13:52.699 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:52.699 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.958 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.216 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:53.216 12:03:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:53.216 true 00:13:53.216 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:53.216 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.475 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.733 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:53.733 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:53.733 true 00:13:53.733 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:53.733 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.992 12:03:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.250 12:03:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:54.250 12:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:54.509 true 00:13:54.509 12:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:54.509 12:03:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.446 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.446 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:55.446 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:55.704 true 00:13:55.704 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:55.704 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.704 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.964 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:55.964 12:03:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:56.223 true 00:13:56.223 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:56.223 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.223 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.481 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:56.481 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:56.741 true 00:13:56.741 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:56.741 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.999 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.999 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:13:56.999 12:03:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:57.258 true 00:13:57.258 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:57.258 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.518 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.518 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:57.518 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:57.776 true 00:13:57.776 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:57.776 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.036 12:03:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.036 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:58.036 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:58.295 true 00:13:58.295 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:13:58.295 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.557 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.557 [2024-07-15 12:03:48.537747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.557 [2024-07-15 12:03:48.537824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.557 [2024-07-15 12:03:48.537865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.557 [2024-07-15 12:03:48.537905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.557 [2024-07-15 12:03:48.537944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
(the identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats several hundred more times, timestamps 2024-07-15 12:03:48.537991 through 12:03:48.562158, elapsed time 00:13:58.557 to 00:13:58.861)
00:13:58.561 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:58.861 [2024-07-15 12:03:48.562201] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.562972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 [2024-07-15 12:03:48.563324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.861 
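What the repeated message says: each rejected command is a 1-block (NLB 1) read against a 512-byte-block namespace, so it needs 512 bytes of buffer, but the SGL supplied with the command describes only 1 byte, and nvmf_bdev_ctrlr_read_cmd refuses it. Below is a minimal, self-contained C sketch of that kind of length check; it only illustrates the rule encoded in the message and is not the actual SPDK ctrlr_bdev.c code (the function and variable names are made up for the example).

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the read-length validation implied by the log:
 * a read of nlb blocks needs nlb * block_size bytes of buffer space, and the
 * command must be rejected when the host's SGL describes less than that. */
static bool read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
		return false;
	}
	return true;
}

int main(void)
{
	/* The exact values from the messages above: 1 block of 512 bytes vs. a 1-byte SGL. */
	read_cmd_length_ok(1, 512, 1);
	return 0;
}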
00:13:58.861 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:13:58.861 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
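For orientation, the two ns_hotplug_stress.sh lines above are the resize step of the stress loop as it appears in this log: the script sets null_size to 1017 and then invokes the bdev_null_resize RPC to resize the null bdev NULL1 to that value while reads against the namespace are still being issued, which is why the read-rejection messages continue uninterrupted around the RPC call.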
[the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line keeps repeating, with timestamps running through 12:03:48.585; duplicates omitted] 00:13:58.865 [2024-07-15 12:03:48.585130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.585979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.865 [2024-07-15 12:03:48.586236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586285] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.586961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 
[2024-07-15 12:03:48.587531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.587979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.588831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.589981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590737] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.590979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 
[2024-07-15 12:03:48.591889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.866 [2024-07-15 12:03:48.591972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.592972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.593968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594353] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.594775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 
[2024-07-15 12:03:48.595913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.595958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.596978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.867 [2024-07-15 12:03:48.597319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.597961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598456] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.598977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.599963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 
[2024-07-15 12:03:48.600263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.600992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.601975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.602020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.602065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.868 [2024-07-15 12:03:48.602112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 [2024-07-15 12:03:48.602601] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 
[2024-07-15 12:03:48.602789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.869 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:58.870 
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error entries repeated from 12:03:48.602789 through 12:03:48.630574 (elapsed 00:13:58.869-00:13:58.874); duplicate lines omitted ...] 
[2024-07-15 12:03:48.630633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.630965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.631987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.632970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633683] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.633965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.874 [2024-07-15 12:03:48.634438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 
[2024-07-15 12:03:48.634922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.634967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.635981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.636991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637331] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.637993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 
[2024-07-15 12:03:48.638511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.638769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.875 [2024-07-15 12:03:48.639647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.639955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.640993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641256] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.641979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.642021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.642825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.642876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.642926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.642974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 
[2024-07-15 12:03:48.643158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.643975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.644976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.876 [2024-07-15 12:03:48.645237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645495] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.645992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.646577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 
[2024-07-15 12:03:48.647223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.647974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.877 [2024-07-15 12:03:48.648311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:58.877 [2024-07-15 12:03:48.648355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeats for several hundred consecutive read commands between 12:03:48.648 and 12:03:48.676; duplicate log lines elided]
00:13:58.880 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:58.882 [2024-07-15 12:03:48.676140] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.882 [2024-07-15 12:03:48.676185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.882 [2024-07-15 12:03:48.676233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.882 [2024-07-15 12:03:48.676273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.882 [2024-07-15 12:03:48.676313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.882 [2024-07-15 12:03:48.676355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.676973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 
[2024-07-15 12:03:48.677456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.677958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.678991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.679986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680476] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.680999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 
[2024-07-15 12:03:48.681694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.681969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.682012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.682063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.682102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.883 [2024-07-15 12:03:48.682148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.682995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.683965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684119] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.684698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.685998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 
[2024-07-15 12:03:48.686040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.686992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.884 [2024-07-15 12:03:48.687670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.687954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688523] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.688979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.689984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 
[2024-07-15 12:03:48.690171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.690956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.691986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.885 [2024-07-15 12:03:48.692459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692552] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.692992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 [2024-07-15 12:03:48.693811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 
[2024-07-15 12:03:48.693857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.886 
(same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, timestamps 12:03:48.693857 through 12:03:48.712604; individual repeats omitted) 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:58.889 
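This stretch of output appears to be a unit test repeatedly driving the read error path: with NLB 1 and a 512-byte block size the command needs 512 bytes of data, but the SGL only describes 1 byte, so ctrlr_bdev.c:309 rejects the read before it reaches the bdev and the request completes with sct=0 (generic status), sc=15 (0x0f, Data SGL Length Invalid, which is the "Read completed with error (sct=0, sc=15)" message above). A minimal standalone sketch of that length check follows; the function and macro names are invented for illustration and are not SPDK's, and the logic simply restates the condition printed in the log.

/*
 * Illustrative sketch only: names are invented for this example and are not
 * SPDK's. The check restates the condition logged by ctrlr_bdev.c:309,
 * i.e. NLB * block size must not exceed the length described by the SGL.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                0x0  /* generic command status type */
#define SC_DATA_SGL_LENGTH_INVALID 0xf  /* decimal 15, as in "sc=15" above */

/* Reject a read whose transfer size exceeds what the SGL describes. */
static bool read_len_exceeds_sgl(uint64_t num_blocks, uint32_t block_size,
                                 uint32_t sgl_length, int *sct, int *sc)
{
    if (num_blocks * (uint64_t)block_size > sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, sgl_length);
        *sct = SCT_GENERIC;
        *sc  = SC_DATA_SGL_LENGTH_INVALID;
        return true;
    }
    return false;
}

int main(void)
{
    int sct, sc;
    /* The values seen in the log: 1 block of 512 bytes against a 1-byte SGL. */
    if (read_len_exceeds_sgl(1, 512, 1, &sct, &sc)) {
        printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
    }
    return 0;
}

Compiled on its own, this prints the same two messages as the log for these values.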
[2024-07-15 12:03:48.713480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.889 
(same *ERROR* line repeated continuously, timestamps 12:03:48.713480 through 12:03:48.722624; individual repeats omitted) 
[2024-07-15 12:03:48.722674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.722958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.723965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724057] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.724968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 [2024-07-15 12:03:48.725579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.891 
[2024-07-15 12:03:48.725626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.725957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.726966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.727969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728086] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.728992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 
[2024-07-15 12:03:48.729291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.729970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.730431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.731386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.731430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.731471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.892 [2024-07-15 12:03:48.731510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.731995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732565] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.732990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 
[2024-07-15 12:03:48.733725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.733964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.734966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.735982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736274] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.893 [2024-07-15 12:03:48.736554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.736986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.737322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 
[2024-07-15 12:03:48.738276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 true 00:13:58.894 [2024-07-15 12:03:48.738804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.738971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.739981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.894 [2024-07-15 12:03:48.740653] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:58.894 [2024-07-15 12:03:48.740698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 "Read NLB 1 * block size 512 > SGL length 1" errors from 12:03:48.740738 through 12:03:48.763163 omitted ...]
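Every suppressed entry above is the same validation failure: nvmf_bdev_ctrlr_read_cmd rejects a read whose NLB (number of logical blocks) times the namespace block size does not fit in the SGL the host supplied, and here 1 block * 512 bytes cannot fit in a 1-byte SGL. The fragment below is a minimal C sketch of that length check, written only to match the wording of the log line; the function and variable names are illustrative, not the actual ctrlr_bdev.c source.

/* Minimal sketch (not the SPDK source) of the check behind the repeated
 * "Read NLB ... * block size ... > SGL length ..." error above. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

static bool
read_fits_in_sgl(uint64_t nlb, uint64_t block_size, uint64_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* Same condition and wording as the log line. */
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu64
		        " > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		return false; /* the read is completed with an error status */
	}
	return true;
}

With the values seen in this run (NLB 1, block size 512, SGL length 1) the check fails for every queued read, which is what floods the log while ns_hotplug_stress.sh exercises namespace removal (the nvmf_subsystem_remove_ns call traced below).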
[... identical read errors from 12:03:48.763200 through 12:03:48.764092 omitted ...]
00:13:58.899 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148
[... identical read errors from 12:03:48.764140 through 12:03:48.764290 omitted ...]
[... identical read errors from 12:03:48.764335 through 12:03:48.764491 omitted ...]
00:13:58.899 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical read errors from 12:03:48.764538 through 12:03:48.764863 omitted ...]
00:13:58.899 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical read errors from 12:03:48.765668 through 12:03:48.766137 omitted ...]
[2024-07-15 12:03:48.791545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.791972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.792995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.904 [2024-07-15 12:03:48.793466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.793971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.794969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 
[2024-07-15 12:03:48.795217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.795727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.796972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.797978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.905 [2024-07-15 12:03:48.798944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.798980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 
[2024-07-15 12:03:48.799480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.799960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.800967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801870] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.801999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.802344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 
[2024-07-15 12:03:48.803725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.803960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.906 [2024-07-15 12:03:48.804662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.804998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.805888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806135] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.806815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 
[2024-07-15 12:03:48.807759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.807977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:58.907 [2024-07-15 12:03:48.808856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error line is logged several hundred more times for the queued reads between 12:03:48.808856 and 12:03:48.836192 (host time 00:13:58.907 through 00:13:59.227); the identical repetitions are omitted here ...]
00:13:58.909 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... further identical "Read NLB 1 * block size 512 > SGL length 1" lines continue below ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.836831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.227 [2024-07-15 12:03:48.837401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 
[2024-07-15 12:03:48.837516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.837994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.838965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.839644] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.840980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 
[2024-07-15 12:03:48.841529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.841984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.228 [2024-07-15 12:03:48.842622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.842961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.843964] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.844964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 
[2024-07-15 12:03:48.845425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.845955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.229 [2024-07-15 12:03:48.846867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.846916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.846964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847899] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.847986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.848942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 
[2024-07-15 12:03:48.848980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.849507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.850959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.851956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852046] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.230 [2024-07-15 12:03:48.852433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.852964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 [2024-07-15 12:03:48.853384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.231 
[2024-07-15 12:03:48.853434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical nvmf_bdev_ctrlr_read_cmd entries repeated for every timestamp from 12:03:48.853434 through 12:03:48.870377 (elapsed 00:13:59.231-00:13:59.234); duplicate entries omitted ...]
00:13:59.234 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
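Every entry in the burst above and below is the same rejection: a read of NLB 1 block at a 512-byte block size is submitted with an SGL that is only 1 byte long, so the request is failed before any data is transferred, and the matching completion status is sct=0, sc=15 (0x0f, Data SGL Length Invalid in the NVMe generic status set). The fragment below is a minimal, self-contained sketch of that kind of length check, added here only to make the message readable; it is not the SPDK ctrlr_bdev.c code, and the function name and macros in it are illustrative assumptions.

/*
 * Illustrative sketch only (not SPDK source): reject a read whose
 * NLB * block size exceeds the SGL length carried by the command,
 * reporting the generic status code printed in the log (sct=0, sc=15).
 */
#include <inttypes.h>
#include <stdio.h>

#define SCT_GENERIC                0x0  /* generic command status type  */
#define SC_DATA_SGL_LENGTH_INVALID 0xf  /* shows up as sc=15 in the log */

static int
read_len_check(uint64_t nlb, uint32_t block_size, uint64_t sgl_length,
               uint8_t *sct, uint8_t *sc)
{
	if (nlb * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu64 "\n", nlb, block_size, sgl_length);
		*sct = SCT_GENERIC;
		*sc  = SC_DATA_SGL_LENGTH_INVALID;
		return -1;
	}
	*sct = SCT_GENERIC;
	*sc  = 0x0;                        /* successful completion */
	return 0;
}

int
main(void)
{
	uint8_t sct, sc;

	/* Same parameters as the failing case in the log. */
	read_len_check(1, 512, 1, &sct, &sc);
	printf("Read completed with error (sct=%u, sc=%u)\n",
	       (unsigned)sct, (unsigned)sc);
	return 0;
}

Compiled as an ordinary C file, this prints the same error line and the same sct/sc pair the log reports; whatever test is running at this point drives that rejection path repeatedly, which is why the identical message recurs.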
[... identical entries continue from 12:03:48.870423 through 12:03:48.881335 (elapsed 00:13:59.234-00:13:59.236); duplicate entries omitted ...]
00:13:59.236 [2024-07-15 12:03:48.881375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.881417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.881464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.881504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.881956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.882956] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.883968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 
[2024-07-15 12:03:48.884194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.236 [2024-07-15 12:03:48.884490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.884825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.885954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.886341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887365] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.887961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 
[2024-07-15 12:03:48.888473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.888992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.237 [2024-07-15 12:03:48.889575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.889918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.890968] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.891970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 
[2024-07-15 12:03:48.892050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.892877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.893985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.894997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.238 [2024-07-15 12:03:48.895043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895155] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.895966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 
[2024-07-15 12:03:48.896347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.896979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.897956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.898960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [2024-07-15 12:03:48.899262] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.239 [ ... same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated for timestamps 2024-07-15 12:03:48.899310 through 12:03:48.924733; duplicate entries omitted ... ] 00:13:59.244 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:59.244 [ ... same message repeated for timestamps 2024-07-15 12:03:48.924773 through 12:03:48.927745; duplicate entries omitted ... ] 00:13:59.245 [2024-07-15 12:03:48.927784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.927828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.927868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.927915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.927960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.928948] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.929996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 
[2024-07-15 12:03:48.930320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.930999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.245 [2024-07-15 12:03:48.931382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
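The *ERROR* entries above are the target's read-length validation firing during the stress run: each Read command asks for NLB = 1 logical block of 512 bytes, but the SGL carried with the command only describes 1 byte of buffer, so the transfer cannot fit and the read is completed with an error instead of being submitted to the bdev (sct=0, sc=15 corresponds to the NVMe generic status "Data SGL Length Invalid"). A minimal bash sketch of the arithmetic behind the message, with made-up variable names and not taken from the SPDK sources:

    nlb=1           # logical blocks requested by the Read command
    block_size=512  # logical block size of the namespace, in bytes
    sgl_length=1    # bytes of buffer described by the command's SGL
    # the requested transfer must fit in the buffer the initiator supplied
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi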
00:13:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.245 12:03:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:59.246 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
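The rpc.py call above is the hot-plug step of the test: ns_hotplug_stress.sh drives SPDK's JSON-RPC interface to attach the Delay0 bdev as a namespace of subsystem nqn.2016-06.io.spdk:cnode1 while reads are still in flight, and the surrounding bursts of suppressed completions (sct=0, sc=11 corresponds to the NVMe status "Invalid Namespace or Format") are the expected fallout of the namespace coming and going. A rough sketch of the kind of add/remove loop such a test runs, assuming namespace ID 1 and the same rpc.py helpers; this is illustrative only, not the contents of ns_hotplug_stress.sh:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # repeatedly hot-add and hot-remove the Delay0 bdev as a namespace while I/O runs
    for i in $(seq 1 10); do
        "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0      # attach the bdev as a new namespace
        sleep 1                                         # give in-flight reads time to land on it
        "$RPC" nvmf_subsystem_remove_ns "$NQN" 1        # detach namespace ID 1 again
    done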
00:13:59.246 [2024-07-15 12:03:49.137320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:59.246 (previous *ERROR* message repeated continuously with only the timestamp changing, through 12:03:49.154662)
00:13:59.249 [2024-07-15 12:03:49.154710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.154988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155795] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.155997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 Message suppressed 999 times: [2024-07-15 12:03:49.156460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 Read completed with error (sct=0, sc=15) 00:13:59.249 [2024-07-15 12:03:49.156507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.156961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.157004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.249 [2024-07-15 12:03:49.157053] ctrlr_bdev.c: 
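Each of the suppressed repeats above is the same rejection: the command asks to read NLB 1, i.e. 1 * 512 = 512 bytes, but the SGL it carries only describes 1 byte, so the target fails the read before it ever reaches the null bdev ("Read completed with error"). A minimal sketch of that kind of length check, using hypothetical names rather than the real ctrlr_bdev.c code, looks like:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch only (hypothetical names, not the actual SPDK source): a read of
 * nlb blocks needs nlb * block_size bytes of payload, and the request is
 * rejected when the SGL the host supplied describes fewer bytes than that. */
static bool read_fits_sgl(uint64_t nlb, uint32_t block_size, uint32_t sgl_len)
{
        if (nlb * block_size > sgl_len) {
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_len);
                return false; /* caller completes the command with an error status */
        }
        return true;
}

int main(void)
{
        /* the case in this log: 1 block of 512 bytes against a 1-byte SGL */
        return read_fits_sgl(1, 512, 1) ? 0 : 1;
}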
12:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:59.252 
12:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:59.252 
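The two traced shell lines show the stress loop moving on: ns_hotplug_stress.sh picks the next size (null_size=1018) and resizes the NULL1 null bdev through scripts/rpc.py bdev_null_resize while the failing reads above are still completing, which appears to be exactly the resize-under-I/O behaviour this hotplug test is meant to exercise.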
[2024-07-15 12:03:49.177505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.177971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.253 [2024-07-15 12:03:49.178647] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.178988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 
[2024-07-15 12:03:49.179807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.179996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.180959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.181914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182707] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.182987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 
[2024-07-15 12:03:49.183806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.254 [2024-07-15 12:03:49.183852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.183902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.183950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.183998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.184972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.185959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186248] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.186440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.187975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 
[2024-07-15 12:03:49.188147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.255 [2024-07-15 12:03:49.188617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.188999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.189993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190618] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.190987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 
[2024-07-15 12:03:49.191749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.191990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.192986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.193803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.193854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.193898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.193945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.193991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.256 [2024-07-15 12:03:49.194317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 [2024-07-15 12:03:49.194790] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.257 
[identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entries, timestamped 2024-07-15 12:03:49.194827 through 12:03:49.222440, repeated several hundred times; duplicate entries elided] 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:59.539 
[2024-07-15 12:03:49.222484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.222960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223725] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.223856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.224996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 
[2024-07-15 12:03:49.225474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.542 [2024-07-15 12:03:49.225515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.225985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.226990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.227975] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.228993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 
[2024-07-15 12:03:49.229587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.543 [2024-07-15 12:03:49.229880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.229923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.229970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.230959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.231992] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.232993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 
[2024-07-15 12:03:49.233236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.233904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.234968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.544 [2024-07-15 12:03:49.235454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.235987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236212] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.236974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 
[2024-07-15 12:03:49.237346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.237962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.238987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.239990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 [2024-07-15 12:03:49.240235] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 
[2024-07-15 12:03:49.240297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.545 
[same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated several hundred times between 12:03:49.240297 and 12:03:49.268502] 
Message suppressed 999 times: [2024-07-15 12:03:49.259841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.549 
Read completed with error (sct=0, sc=15) 00:13:59.549 
[2024-07-15 12:03:49.268502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551
[2024-07-15 12:03:49.268541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.268964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.269988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.270970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271485] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.271992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.551 [2024-07-15 12:03:49.272544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 
[2024-07-15 12:03:49.272586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.272968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.273984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.274978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275018] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.275994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 
[2024-07-15 12:03:49.276115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.276405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.277992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.278036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.278079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.278111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.552 [2024-07-15 12:03:49.278155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.278973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279191] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.279986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 
[2024-07-15 12:03:49.280515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.280954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.281989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.282978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283273] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.553 [2024-07-15 12:03:49.283513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.283996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 
[2024-07-15 12:03:49.284664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.284962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.554 [2024-07-15 12:03:49.285775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* entry repeats several hundred more times, timestamps 12:03:49.285818 through 12:03:49.312679 ...]
00:13:59.559 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same *ERROR* entry continues, timestamps 12:03:49.312719 through 12:03:49.314488 ...]
00:13:59.560 [2024-07-15 12:03:49.314542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 
[2024-07-15 12:03:49.314585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.314956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.315996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.316480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317726] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.317995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.560 [2024-07-15 12:03:49.318255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 
[2024-07-15 12:03:49.318854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.318982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.319981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.320991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321381] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.321993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 
[2024-07-15 12:03:49.322525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.561 [2024-07-15 12:03:49.322871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.322917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.322965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.323013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.323061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.323110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.323951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.324995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325645] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.325960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.326842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 
[2024-07-15 12:03:49.327032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.327785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.562 [2024-07-15 12:03:49.328963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329722] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.329962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.330929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 
[2024-07-15 12:03:49.330962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.331997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.332042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.332088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.332130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.332175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.563 [2024-07-15 12:03:49.332217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:59.563 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "*ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeats several hundred times, timestamps 2024-07-15 12:03:49.332 through 12:03:49.360; the test's "true" output is interleaved at 00:13:59.564 ...]
00:13:59.568 [2024-07-15 12:03:49.360923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.360970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.568 [2024-07-15 12:03:49.361233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.361994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.569 [2024-07-15 12:03:49.362036] ctrlr_bdev.c: 
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated from 2024-07-15 12:03:49.355462 through 12:03:49.362405; duplicate lines omitted ...]
00:13:59.569 12:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148
[... the same error repeated from 12:03:49.362451 through 12:03:49.362744; duplicate lines omitted ...]
00:13:59.569 12:03:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
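The two traced commands above are the namespace-hotplug step of ns_hotplug_stress.sh: the script checks that the I/O generator it started earlier (PID 1052148 here) is still alive, then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 over JSON-RPC while that I/O is still in flight. The surrounding ctrlr_bdev.c:309 messages are the target rejecting reads whose transfer length (NLB 1 * block size 512 = 512 bytes) exceeds the 1-byte buffer described by the request's SGL, exactly as the message states. A minimal sketch of the traced step, assuming an $io_pid variable and a Malloc0 backing bdev for the re-add (both illustrative, not taken from this log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # kill -0 sends no signal; it only fails if the I/O generator has already exited.
    kill -0 "$io_pid"

    # Hot-remove namespace 1 while reads against it may still be queued ...
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1

    # ... and re-attach a backing bdev so the next loop iteration can remove it again
    # (assumed bdev name; the real script drives this from its own configuration).
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0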
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated from 12:03:49.362791 through 12:03:49.365957; duplicate lines omitted ...]
00:13:59.570 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated from 12:03:49.366766 onward (last timestamp in this excerpt: 12:03:49.383404); duplicate lines omitted ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.383937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 
[2024-07-15 12:03:49.384790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.384989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.385972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.573 [2024-07-15 12:03:49.386018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.386900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387830] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.387972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.388935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 
[2024-07-15 12:03:49.388976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.389992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:59.574 [2024-07-15 12:03:49.390712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:00.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.512 12:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.770 12:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:00.770 12:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:01.028 true 00:14:01.028 12:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:01.028 12:03:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.028 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.287 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:01.287 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:01.546 true 00:14:01.546 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:01.546 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.546 Message suppressed 999 times: Read completed with 
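The non-error lines above are the xtrace of target/ns_hotplug_stress.sh (the @44-@50 suffixes appear to be script line numbers): while queued reads keep failing, the test repeatedly detaches and re-attaches namespace 1 on nqn.2016-06.io.spdk:cnode1 and grows the NULL1 bdev by one block per pass. A minimal sketch of the loop those commands imply, keeping only the rpc.py sub-commands, NQN, bdev names, PID and sizes visible in the trace; the loop structure, variable names and iteration count are assumptions, not the script itself:

    #!/usr/bin/env bash
    # Sketch of the hot-plug loop suggested by the ns_hotplug_stress trace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    app_pid=${1:?usage: $0 <pid of the nvmf target under test>}   # 1052148 in this run
    null_size=1018

    for _ in $(seq 1 10); do
        # Stop as soon as the target process goes away (what "kill -0 1052148" checks above)
        kill -0 "$app_pid" || break

        # Detach namespace 1, failing any reads still queued against it ...
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1
        # ... then immediately re-attach the Delay0 bdev as a namespace
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0

        # Grow the NULL1 bdev by one block per pass (1019, then 1020, in the trace)
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done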
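Every *ERROR* entry in this stretch is the same length check in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) rejecting a read: the command asks for NLB 1 block of 512 bytes while the SGL attached to the request only describes 1 byte, so the read is completed with an error instead of being sent to the bdev; the suppressed "Read completed with error (sct=0, sc=11)" lines are presumably the initiator-side completions of those same reads. When triaging a run like this it is usually enough to count the burst rather than read it; a throwaway filter along these lines does the job (saving this console output as build.log is an assumption):

    # Total number of rejected reads logged by the target
    grep -c 'nvmf_bdev_ctrlr_read_cmd: \*ERROR\*' build.log

    # Bucket the entries by second to see how long each burst lasted
    grep -o '\[2024-07-15 [0-9:]\{8\}' build.log | sort | uniq -c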
00:14:01.546 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:01.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:01.815 [2024-07-15 12:03:51.719925 - 12:03:51.734016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entry repeated continuously from 00:14:01.815 to 00:14:01.818)
00:14:01.818 [2024-07-15 12:03:51.734062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.734541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735756] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.735970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 
[2024-07-15 12:03:51.736868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.736986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.737971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.818 [2024-07-15 12:03:51.738556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.738960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.739989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 
[2024-07-15 12:03:51.740356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.740737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.741996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.819 [2024-07-15 12:03:51.742982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743455] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.743997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 
[2024-07-15 12:03:51.744729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.744978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.745966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.746976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747454] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.747988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.748040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:01.820 [2024-07-15 12:03:51.748092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.748146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.820 [2024-07-15 12:03:51.748186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:01.821 [2024-07-15 12:03:51.748507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:01.821 [2024-07-15 12:03:51.748598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.748979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.749973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 [2024-07-15 12:03:51.750963] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.821 
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 ... identical error reported for every read in this pass, wall-clock timestamps 2024-07-15 12:03:51.751006 through 12:03:51.766196, elapsed 00:14:01.821 to 00:14:01.824 ...] 
00:14:01.824 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 ... identical error continues, wall-clock timestamps 2024-07-15 12:03:51.766712 through 12:03:51.779409, elapsed 00:14:01.824 to 00:14:01.826 ...] 
[2024-07-15 12:03:51.779458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826
[2024-07-15 12:03:51.779507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.779994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.780042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.826 [2024-07-15 12:03:51.780086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.780987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781924] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.781965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.782498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 
[2024-07-15 12:03:51.783834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.783958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.784962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.827 [2024-07-15 12:03:51.785590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.785984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786129] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.786990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 
[2024-07-15 12:03:51.787459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.787967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.788954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.789006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.789054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.789859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.789910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.789958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790457] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.828 [2024-07-15 12:03:51.790545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.790967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 
[2024-07-15 12:03:51.791552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.791965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.792963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.793638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.794983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 
[2024-07-15 12:03:51.795554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.829 [2024-07-15 12:03:51.795798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.795846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.795899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.795947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.795994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:01.830 [2024-07-15 12:03:51.796750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:01.830 [... the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" error repeated with successive timestamps from 12:03:51.796 through 12:03:51.820 ...]
00:14:02.118 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:02.119 [... the same error repeated with successive timestamps through 12:03:51.823 ...]
00:14:02.119 [2024-07-15 12:03:51.823755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.823803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.823851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.823895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.823943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.823994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.824987] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.119 [2024-07-15 12:03:51.825950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.825990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 
[2024-07-15 12:03:51.826193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.826637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.827962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828877] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.828999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.120 [2024-07-15 12:03:51.829845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 
[2024-07-15 12:03:51.830134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.830963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.831983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.832034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.832069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.832108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.832151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.832964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833264] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.833982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 
[2024-07-15 12:03:51.834503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.834980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.121 [2024-07-15 12:03:51.835659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.835700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.835740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.835782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.835824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.835985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.836694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837350] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.837970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 
[2024-07-15 12:03:51.838552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.838973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.839960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.840958] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.841004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.841048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.122 [2024-07-15 12:03:51.841093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.841995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.842036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 [2024-07-15 12:03:51.842081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 
[2024-07-15 12:03:51.842114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.123 
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd *ERROR* messages repeated for each read command issued by the unit test, timestamps 12:03:51.842160 through 12:03:51.871199, elapsed 00:14:02.123-00:14:02.128 ...]
[2024-07-15 12:03:51.871245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 
[2024-07-15 12:03:51.871291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.871965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.872936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.128 [2024-07-15 12:03:51.872976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:02.128 [2024-07-15 12:03:51.873705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.128 [2024-07-15 12:03:51.873793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.873846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.873902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.873947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.873994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.874989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.875372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876860] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.876953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.877993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 
[2024-07-15 12:03:51.878087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.129 [2024-07-15 12:03:51.878579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.878977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.879963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880877] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.880965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.881979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 
[2024-07-15 12:03:51.882073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.882998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.883985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.884032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.884079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.130 [2024-07-15 12:03:51.884119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.884982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 
[2024-07-15 12:03:51.885876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.885976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.886470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.887998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131 [2024-07-15 12:03:51.888973] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.131
00:14:02.131 [2024-07-15 12:03:51.889019 .. 12:03:51.916177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the identical error line is emitted for every read issued by the unit test over this interval; only one representative entry is kept here)
size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.136 [2024-07-15 12:03:51.916930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.916977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917352] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.917853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.918974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 
[2024-07-15 12:03:51.919355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.919991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.920995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921710] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.137 [2024-07-15 12:03:51.921758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.921802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.921856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.921905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.921953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.921998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.922979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 
[2024-07-15 12:03:51.923372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.923999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 true 00:14:02.138 [2024-07-15 12:03:51.924132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.924996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925663] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.925960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.138 [2024-07-15 12:03:51.926243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926916] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.926960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.927002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.927048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.138 [2024-07-15 12:03:51.927089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.927990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 
[2024-07-15 12:03:51.928088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.928711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.929965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.930961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931202] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.931965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 
[2024-07-15 12:03:51.932436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.139 [2024-07-15 12:03:51.932938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.932986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.933966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 [2024-07-15 12:03:51.934827] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.140 
(the ERROR line above repeats continuously from 12:03:51.934 onward while the hotplug stress test issues reads; duplicate lines omitted)
12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:02.143 
12:03:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.143 
(the same ERROR line continues to repeat through 12:03:51.963; duplicate lines omitted)
[2024-07-15 12:03:51.963863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.963910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.963957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.964959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.145 [2024-07-15 12:03:51.965387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.965971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966090] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.966946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.967956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 
[2024-07-15 12:03:51.968093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.968964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.969968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.970016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.970063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.970108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.146 [2024-07-15 12:03:51.970155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970343] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.970960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 
[2024-07-15 12:03:51.971680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.971967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.972984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.973370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974715] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.974997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.147 [2024-07-15 12:03:51.975758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 
[2024-07-15 12:03:51.975798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.975839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.975886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.975944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.975992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.976999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.148 [2024-07-15 12:03:51.977687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.977987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:02.148 [2024-07-15 12:03:51.978331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.978970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.979956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.980989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 [2024-07-15 12:03:51.981290] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.148 
(the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 entry repeats continuously from [2024-07-15 12:03:51.981333] through [2024-07-15 12:03:52.009027]; duplicate log entries omitted) 
[2024-07-15 12:03:52.009027] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.009992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 
[2024-07-15 12:03:52.010134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.154 [2024-07-15 12:03:52.010538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.010582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.010627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.010687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.010735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.011995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.012986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013200] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.013974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 
[2024-07-15 12:03:52.014551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.014977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.015971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.016012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.016044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.016084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.155 [2024-07-15 12:03:52.016121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.016982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017171] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.017990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 
[2024-07-15 12:03:52.018424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.018973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.019988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.156 [2024-07-15 12:03:52.020503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.020548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.020593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.020640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.020690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021538] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.021973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 
[2024-07-15 12:03:52.022695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.022990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.023952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.024974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025206] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.025971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.157 [2024-07-15 12:03:52.026554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.158 [2024-07-15 12:03:52.026593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.158 [2024-07-15 12:03:52.026633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.158 
[2024-07-15 12:03:52.026672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.158
[... identical *ERROR* line repeated continuously from 12:03:52.026713 to 12:03:52.030766; only the timestamps differ ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.159
[... identical *ERROR* line repeated continuously from 12:03:52.031295 to 12:03:52.054898; only the timestamps differ ...]
[2024-07-15 12:03:52.054939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.054982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.055980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056122] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.056966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 
[2024-07-15 12:03:52.057870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.057997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.164 [2024-07-15 12:03:52.058617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.058955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.059985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060153] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.060990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 
[2024-07-15 12:03:52.061536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.061963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.062787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.063998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.165 [2024-07-15 12:03:52.064043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064558] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.064993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 
[2024-07-15 12:03:52.065843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.065974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.066979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.067960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.166 [2024-07-15 12:03:52.068756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.068808] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.068854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.068900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.068951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.068998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 
[2024-07-15 12:03:52.069925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.069963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.070954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.071967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 [2024-07-15 12:03:52.072425] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.167 
[... same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 line repeated several hundred times, message timestamps [2024-07-15 12:03:52.072471] through [2024-07-15 12:03:52.101048], console time 00:14:02.167 through 00:14:02.462; one interleaved notice at 00:14:02.170: "Message suppressed 999 times: Read completed with error (sct=0, sc=15)" ...]
[2024-07-15 12:03:52.101092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.101773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.102995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.103978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104227] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.104960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 
[2024-07-15 12:03:52.105362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.462 [2024-07-15 12:03:52.105872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.105918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.105967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.106981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.107973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108312] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.108969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 
[2024-07-15 12:03:52.109509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.109993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.110997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.111047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.463 [2024-07-15 12:03:52.111093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111920] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.111973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.112853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 
[2024-07-15 12:03:52.113931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.113967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.114983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.115964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.116011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.116056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.116109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.116157] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.464 [2024-07-15 12:03:52.116204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.116978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 
[2024-07-15 12:03:52.117921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.117962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.118978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.119025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.119075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.466 [2024-07-15 12:03:52.119131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical nvmf_bdev_ctrlr_read_cmd errors repeated through 12:03:52.122639; duplicate log records omitted ...] 00:14:02.466 [2024-07-15 12:03:52.122682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.122726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.122769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.122817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.122858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.122893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 12:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.467 [2024-07-15 12:03:52.327921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.327986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.467 [2024-07-15 12:03:52.328658] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... identical nvmf_bdev_ctrlr_read_cmd errors repeated through 12:03:52.351069; duplicate log records omitted ...] 00:14:02.472 [2024-07-15 12:03:52.351110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.351971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352383] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.352974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 
[2024-07-15 12:03:52.353942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.353985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.354971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.472 [2024-07-15 12:03:52.355733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.355932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.355978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356456] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.356989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 
[2024-07-15 12:03:52.357588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.357980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.358579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 12:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:02.473 [2024-07-15 12:03:52.358624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:02.473 [2024-07-15 12:03:52.358671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 12:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:02.473 [2024-07-15 12:03:52.359479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.359981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360504] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.360967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.361011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.361053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.361095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.361139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.473 [2024-07-15 12:03:52.361185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 
[2024-07-15 12:03:52.361677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.361965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.362969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.363997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364603] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.364964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 
[2024-07-15 12:03:52.365740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.474 [2024-07-15 12:03:52.365881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.365930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.365977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.366998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 Message suppressed 999 times: [2024-07-15 12:03:52.367873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 Read completed with error (sct=0, sc=15) 00:14:02.475 [2024-07-15 12:03:52.367921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.367971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:02.475 [2024-07-15 12:03:52.368170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.368956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.369617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.370966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371251] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.475 [2024-07-15 12:03:52.371374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.371974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 
[2024-07-15 12:03:52.372417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.372977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.373968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.374959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375290] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.375950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 
[2024-07-15 12:03:52.376536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.476 [2024-07-15 12:03:52.376818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.376850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.376892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.376938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.376977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.377982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.378967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379018] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.379991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 
[2024-07-15 12:03:52.380214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.380513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.381991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.477 [2024-07-15 12:03:52.382378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.382964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383445] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.383960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 
[2024-07-15 12:03:52.384746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.384989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.385976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.478 [2024-07-15 12:03:52.386830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.386877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.386918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.386950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.386990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.387342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.388971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 
[2024-07-15 12:03:52.389206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.389962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.390982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391600] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.479 [2024-07-15 12:03:52.391838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.391883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.391932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.391982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.392974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 [2024-07-15 12:03:52.393265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.480 
size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.418974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.419016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.419053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.419100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.485 [2024-07-15 12:03:52.419848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.419893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.419934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:02.485 [2024-07-15 12:03:52.419980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.420999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.421984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422342] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.422982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.485 [2024-07-15 12:03:52.423018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.423527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 
[2024-07-15 12:03:52.423569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.424968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.425985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426306] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.426848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 
[2024-07-15 12:03:52.427625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.427960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.486 [2024-07-15 12:03:52.428343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.428992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.429989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430506] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.430969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 
[2024-07-15 12:03:52.431634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.431967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.487 [2024-07-15 12:03:52.432844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.432888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.432946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.432996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.488 [2024-07-15 12:03:52.433474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434714] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.434993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 
[2024-07-15 12:03:52.435855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.435949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.436961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.437004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.437184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.769 [2024-07-15 12:03:52.437231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeated for each remaining read command in this test batch; timestamps 2024-07-15 12:03:52.437-12:03:52.465, log marks 00:14:02.769-00:14:02.775 ...]
00:14:02.775 [2024-07-15 12:03:52.465490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.465977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466667] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.466978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 
[2024-07-15 12:03:52.467939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.467987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.468968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.775 [2024-07-15 12:03:52.469974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470278] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.470690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.776 [2024-07-15 12:03:52.471503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.471984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472188] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.472994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 
[2024-07-15 12:03:52.473350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.473998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.474987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.475981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.476024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.476057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.776 [2024-07-15 12:03:52.476102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476287] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.476962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 
[2024-07-15 12:03:52.477393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.477977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.478987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479861] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.479997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.480042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.777 [2024-07-15 12:03:52.480086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.480979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 
[2024-07-15 12:03:52.481074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.481715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.778 [2024-07-15 12:03:52.482978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.778 
[... log message "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" repeated several hundred times in succession; entries are identical apart from their timestamps (2024-07-15 12:03:52.483018 through 12:03:52.511532, elapsed time 00:14:02.778 to 00:14:02.783) ...]
00:14:02.783 [2024-07-15 12:03:52.511580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.511968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512835] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.783 [2024-07-15 12:03:52.512879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.512921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.512968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.513935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 
[2024-07-15 12:03:52.513978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.514993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.515965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516803] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.516993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.784 [2024-07-15 12:03:52.517922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.517973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 
[2024-07-15 12:03:52.518066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.518978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.519965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520549] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.520686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.521993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 
[2024-07-15 12:03:52.522458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.785 [2024-07-15 12:03:52.522791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.522837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.522881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.522931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.522981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.523959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524904] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:02.786 [2024-07-15 12:03:52.524944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.524984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.525971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526444] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.786 [2024-07-15 12:03:52.526797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.526847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.526892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.526940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.526984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 true 00:14:02.787 [2024-07-15 12:03:52.527430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:02.787 [2024-07-15 12:03:52.527642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.527966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.787 [2024-07-15 12:03:52.528676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1
00:14:02.787 [2024-07-15 12:03:52.528714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:02.787 [... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 — identical message repeated, duplicate log lines omitted ...]
00:14:02.792 12:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148
00:14:02.792 12:03:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.793 [2024-07-15 12:03:52.555554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB
1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.555996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556821] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.556879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.557949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.793 [2024-07-15 12:03:52.558249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 
[2024-07-15 12:03:52.558596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.558975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.559997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.560980] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.561951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 
[2024-07-15 12:03:52.562247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.562604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.794 [2024-07-15 12:03:52.563580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.563965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.564960] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.565872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 
[2024-07-15 12:03:52.566240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.566826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.567998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.568983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.569028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.569066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.569106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.569147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.795 [2024-07-15 12:03:52.569195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569291] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.569966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 
[2024-07-15 12:03:52.570545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.570980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.571968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572819] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.572993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.796 [2024-07-15 12:03:52.573038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.573331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 [2024-07-15 12:03:52.574764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.797 
[2024-07-15 12:03:52.574805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:02.797 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
[... the same nvmf_bdev_ctrlr_read_cmd *ERROR* entry ("Read NLB 1 * block size 512 > SGL length 1") repeats verbatim from 12:03:52.574839 through 12:03:52.602717; duplicate entries elided ...] 
00:14:02.802 [2024-07-15 12:03:52.602756] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.602805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.602850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.602898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.602940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.602982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 
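The *ERROR* entries around this point all report the same rejected request: the read command asks for NLB 1 block of 512 bytes (512 bytes of data) while the SGL attached to the command describes only 1 byte of buffer, so ctrlr_bdev.c fails the read before it ever reaches the backing bdev. A minimal shell sketch of that length check, purely illustrative (the real check is C code inside SPDK, not part of ns_hotplug_stress.sh; the values are copied from the error text):

  # illustrative only -- values copied from the error message above
  nlb=1; block_size=512; sgl_len=1
  if (( nlb * block_size > sgl_len )); then
      echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_len}" >&2
  fi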
[2024-07-15 12:03:52.603889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.603995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:02.802 [2024-07-15 12:03:52.604427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:03.740 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:03.740 12:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.000 12:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:04.000 12:03:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:04.000 true 00:14:04.258 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:04.258 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.258 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.517 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:04.517 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:04.777 true 00:14:04.777 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:04.777 12:03:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.711 12:03:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.969 12:03:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:05.969 12:03:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:06.227 true 00:14:06.227 12:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:06.227 12:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.163 12:03:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.163 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:07.163 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:07.422 true 00:14:07.422 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:07.422 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.681 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.681 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:07.681 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:07.939 true 00:14:07.939 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:07.939 12:03:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.316 12:03:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.316 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:14:09.316 12:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:09.316 12:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:09.316 true 00:14:09.316 12:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:09.316 12:03:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.251 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.509 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:10.509 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:10.509 true 00:14:10.509 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:10.509 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.768 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.026 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:11.026 12:04:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:11.026 true 00:14:11.285 12:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:11.285 12:04:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.220 12:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.479 12:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:12.479 12:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:12.738 true 00:14:12.738 12:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:12.738 12:04:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.673 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.673 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:13.673 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:13.931 true 00:14:13.931 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:13.931 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.931 12:04:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.189 12:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:14.189 12:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:14.447 true 00:14:14.447 12:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:14.447 12:04:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 12:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.823 12:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:15.823 12:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:15.823 true 00:14:15.823 12:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:15.823 12:04:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:16.759 12:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.018 12:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:17.018 12:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:17.018 true 00:14:17.018 12:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:17.018 12:04:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.277 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.535 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:17.535 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:17.535 Initializing NVMe Controllers 00:14:17.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.535 Controller IO queue size 128, less than required. 00:14:17.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.536 Controller IO queue size 128, less than required. 00:14:17.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:17.536 Initialization complete. Launching workers. 
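The ns_hotplug_stress.sh entries interleaved above (script lines @44-@50) trace a simple loop: as long as the background I/O job is still alive (PID 1052148 in this run, presumably the perf process whose summary follows), the script removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as that namespace, bumps null_size by one, and resizes the NULL1 null bdev to the new value (1023, 1024, ... 1036 in this trace). A hedged bash reconstruction inferred from the trace, not copied from the script source; the rpc.py path is shortened and $perf_pid stands in for the literal PID:

  # sketch of the add/resize/remove loop traced above (assumptions: rpc path, $perf_pid, starting null_size)
  rpc=./scripts/rpc.py
  null_size=1022                                    # chosen so the first pass resizes to 1023, as in the trace
  while kill -0 "$perf_pid"; do                     # @44: loop until the perf process exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @45
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # @46
      null_size=$((null_size + 1))                                      # @49
      $rpc bdev_null_resize NULL1 "$null_size"                          # @50
  done

When the perf job exits, kill -0 fails (the "line 44: kill: (1052148) - No such process" entry just below the latency summary), the script waits on the finished PID (@53) and removes both namespaces (@54/@55) before moving on to the null-bdev phase.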
00:14:17.536 ======================================================== 00:14:17.536 Latency(us) 00:14:17.536 Device Information : IOPS MiB/s Average min max 00:14:17.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3005.17 1.47 24306.51 1154.56 1011931.49 00:14:17.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14305.82 6.99 8924.80 2736.77 459525.31 00:14:17.536 ======================================================== 00:14:17.536 Total : 17310.99 8.45 11595.05 1154.56 1011931.49 00:14:17.536 00:14:17.536 true 00:14:17.794 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1052148 00:14:17.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1052148) - No such process 00:14:17.794 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1052148 00:14:17.794 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.794 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.053 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:18.053 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:18.053 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:18.053 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.053 12:04:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:18.053 null0 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:18.311 null1 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.311 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:18.569 null2 00:14:18.569 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.569 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.569 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:18.827 null3 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:18.827 null4 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.827 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:19.086 null5 00:14:19.086 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.086 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.086 12:04:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:19.347 null6 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:19.347 null7 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
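The eight bdev_null_create calls above each register a null bdev (null0 through null7) with the same two arguments seen in the trace, 100 and 4096 (the bdev size and its block size), one bdev per worker thread. The same eight creations written as a loop, with the rpc.py path shortened:

  # equivalent to the eight bdev_null_create calls above; rpc path shortened
  rpc=./scripts/rpc.py
  for i in $(seq 0 7); do
      $rpc bdev_null_create "null$i" 100 4096
  done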
00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
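Each backgrounded worker runs add_remove <nsid> <bdev> (script lines @14-@18): ten iterations that attach the given null bdev to nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID and then remove it again, so eight namespaces are hot-added and hot-removed concurrently. The @58-@66 entries show the launcher side: nthreads=8, one backgrounded add_remove per null bdev, the PIDs collected into pids, and a final wait on all of them (the "wait 1057529 1057531 ..." entry below). A hedged reconstruction inferred from the trace, not copied from the script source:

  # inferred from the @14-@18 and @58-@66 trace entries; rpc path shortened
  rpc=./scripts/rpc.py
  add_remove() {
      local nsid=$1 bdev=$2 i                                                      # @14
      for ((i = 0; i < 10; i++)); do                                               # @16
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18
      done
  }

  nthreads=8; pids=()                          # @58
  for ((i = 0; i < nthreads; i++)); do         # @59/@62
      add_remove $((i + 1)) "null$i" &         # @63
      pids+=($!)                               # @64
  done
  wait "${pids[@]}"                            # @66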
00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1057529 1057531 1057532 1057535 1057536 1057538 1057540 1057541 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:19.347 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:19.348 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:19.348 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.348 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.607 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.867 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.126 12:04:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.126 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.386 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.645 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.646 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.905 12:04:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.164 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.423 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.682 12:04:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.682 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.942 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.233 12:04:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.233 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.233 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.234 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.493 12:04:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.493 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.494 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.754 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.013 12:04:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:23.273 rmmod nvme_tcp 00:14:23.273 rmmod nvme_fabrics 00:14:23.273 rmmod nvme_keyring 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1051661 ']' 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1051661 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1051661 ']' 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1051661 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1051661 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1051661' 00:14:23.273 killing 
process with pid 1051661 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1051661 00:14:23.273 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1051661 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.533 12:04:13 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.073 12:04:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.073 00:14:26.073 real 0m46.769s 00:14:26.073 user 3m12.169s 00:14:26.073 sys 0m15.639s 00:14:26.073 12:04:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.073 12:04:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.073 ************************************ 00:14:26.073 END TEST nvmf_ns_hotplug_stress 00:14:26.073 ************************************ 00:14:26.073 12:04:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.073 12:04:15 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:26.073 12:04:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.073 12:04:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.073 12:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.073 ************************************ 00:14:26.073 START TEST nvmf_connect_stress 00:14:26.073 ************************************ 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:26.073 * Looking for test storage... 
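The ns_hotplug_stress trace that finishes above reduces to a short add/remove loop. The sketch below is reconstructed from the traced ns_hotplug_stress.sh@16-18 commands, not the verbatim script; the variable names and the backgrounded add/remove calls (suggested by the out-of-order interleaving in the trace) are assumptions.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    i=0
    while (( i < 10 )); do                                    # ten hotplug rounds, as in the @16 trace
        for n in {1..8}; do
            # attach null bdev null(n-1) as namespace n; '&' is an assumption based on the interleaved ordering
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" &
        done
        wait
        for n in {1..8}; do
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n" &  # detach the same eight namespaces again
        done
        wait
        (( ++i ))
    done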
00:14:26.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.073 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.074 12:04:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:31.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:31.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:31.352 Found net devices under 0000:86:00.0: cvl_0_0 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.352 12:04:21 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:31.352 Found net devices under 0000:86:00.1: cvl_0_1 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.352 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:31.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:31.613 00:14:31.613 --- 10.0.0.2 ping statistics --- 00:14:31.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.613 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:14:31.613 00:14:31.613 --- 10.0.0.1 ping statistics --- 00:14:31.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.613 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1061882 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1061882 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1061882 ']' 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.613 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.613 [2024-07-15 12:04:21.554832] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
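For reference, the nvmf_tcp_init plumbing traced just above amounts to the following sequence (a sketch assembled from the common.sh trace of this run; the cvl_0_0/cvl_0_1 interface names are simply the ones this host discovered):

    ip netns add cvl_0_0_ns_spdk                                          # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE, per the nvmf/common.sh@480 trace), which is why the listener created later is reachable at 10.0.0.2:4420.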
00:14:31.613 [2024-07-15 12:04:21.554878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.613 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.613 [2024-07-15 12:04:21.611421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.872 [2024-07-15 12:04:21.653370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.872 [2024-07-15 12:04:21.653407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.872 [2024-07-15 12:04:21.653415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.872 [2024-07-15 12:04:21.653421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.872 [2024-07-15 12:04:21.653426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.872 [2024-07-15 12:04:21.653478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.872 [2024-07-15 12:04:21.653518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.872 [2024-07-15 12:04:21.653519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.872 [2024-07-15 12:04:21.787100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.872 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.873 [2024-07-15 12:04:21.811293] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:31.873 NULL1 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1061919 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.873 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.132 12:04:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.390 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.390 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:32.390 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.390 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.390 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.649 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.649 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:32.649 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.649 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.649 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:32.908 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.908 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1061919 00:14:32.908 12:04:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.908 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.908 12:04:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.486 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.486 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:33.486 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.486 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.486 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.744 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.744 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:33.744 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.744 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.744 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.001 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.001 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:34.001 12:04:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.001 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.001 12:04:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.260 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.260 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:34.260 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.260 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.260 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.518 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.518 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:34.518 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.518 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.518 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.085 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:35.085 12:04:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.085 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.085 12:04:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.343 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.343 12:04:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:35.343 12:04:25 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.343 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.343 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.602 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.602 12:04:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:35.602 12:04:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.602 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.602 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:35.861 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.861 12:04:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:35.861 12:04:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.861 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.861 12:04:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.119 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.119 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:36.119 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.119 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.119 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.687 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.687 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:36.687 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.687 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.687 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.946 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.946 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:36.946 12:04:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.946 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.946 12:04:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.205 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.205 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:37.205 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.205 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.205 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.464 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.464 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:37.464 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:37.464 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.464 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.031 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.031 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:38.031 12:04:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.031 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.031 12:04:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.290 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.290 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:38.290 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.290 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.290 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.600 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.600 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:38.600 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.600 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.600 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.867 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.867 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:38.867 12:04:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.867 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.867 12:04:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.126 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.126 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:39.126 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.126 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.126 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.385 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.385 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:39.385 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.385 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.385 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.952 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.952 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:39.952 12:04:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.952 12:04:29 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.952 12:04:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.210 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.210 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:40.210 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.210 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.210 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.468 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.468 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:40.468 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.468 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.468 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.725 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.725 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:40.725 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.725 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.725 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.290 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.290 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:41.290 12:04:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.290 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.290 12:04:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.547 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.547 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:41.547 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.547 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.547 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.804 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.804 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:41.804 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.804 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.804 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.061 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.061 12:04:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.061 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1061919 00:14:42.061 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1061919) - No such process 00:14:42.061 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1061919 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.062 12:04:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.062 rmmod nvme_tcp 00:14:42.062 rmmod nvme_fabrics 00:14:42.062 rmmod nvme_keyring 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1061882 ']' 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1061882 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1061882 ']' 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1061882 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.062 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1061882 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1061882' 00:14:42.320 killing process with pid 1061882 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1061882 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1061882 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.320 12:04:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.856 12:04:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:44.856 00:14:44.856 real 0m18.750s 00:14:44.856 user 0m39.285s 00:14:44.856 sys 0m8.181s 00:14:44.856 12:04:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.856 12:04:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.856 ************************************ 00:14:44.856 END TEST nvmf_connect_stress 00:14:44.856 ************************************ 00:14:44.856 12:04:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:44.856 12:04:34 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.856 12:04:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.856 12:04:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.856 12:04:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:44.856 ************************************ 00:14:44.856 START TEST nvmf_fused_ordering 00:14:44.856 ************************************ 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.856 * Looking for test storage... 00:14:44.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.856 12:04:34 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:44.856 12:04:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:50.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:50.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:50.134 Found net devices under 0000:86:00.0: cvl_0_0 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:50.134 Found net devices under 0000:86:00.1: cvl_0_1 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.134 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:50.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:14:50.393 00:14:50.393 --- 10.0.0.2 ping statistics --- 00:14:50.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.393 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:14:50.393 00:14:50.393 --- 10.0.0.1 ping statistics --- 00:14:50.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.393 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1067071 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1067071 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1067071 ']' 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.393 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.393 [2024-07-15 12:04:40.354625] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:14:50.393 [2024-07-15 12:04:40.354667] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.393 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.652 [2024-07-15 12:04:40.424037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.652 [2024-07-15 12:04:40.463487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.652 [2024-07-15 12:04:40.463528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.652 [2024-07-15 12:04:40.463536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.652 [2024-07-15 12:04:40.463542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.652 [2024-07-15 12:04:40.463547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
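Note on the trace at this point: the fused_ordering test rebuilds the same per-test plumbing. The helper trace above moves the target-side port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as 10.0.0.2, keeps the initiator-side port (cvl_0_1) in the root namespace as 10.0.0.1, opens TCP/4420, verifies both directions with one-packet pings, and starts a fresh nvmf_tgt on core mask 0x2. Condensed from the common.sh helpers traced above into plain commands (a sketch, not a verbatim extract of the script):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The trace that follows then creates the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, the 10.0.0.2:4420 listener, a null bdev NULL1, attaches it as a namespace, and runs the fused_ordering initiator against it.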
00:14:50.652 [2024-07-15 12:04:40.463570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 [2024-07-15 12:04:40.587103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 [2024-07-15 12:04:40.607286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.652 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 NULL1 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.653 12:04:40 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.653 12:04:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:50.911 [2024-07-15 12:04:40.659621] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:14:50.911 [2024-07-15 12:04:40.659656] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067096 ] 00:14:50.911 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.170 Attached to nqn.2016-06.io.spdk:cnode1 00:14:51.170 Namespace ID: 1 size: 1GB 00:14:51.170 fused_ordering(0) 00:14:51.170 fused_ordering(1) 00:14:51.170 fused_ordering(2) 00:14:51.170 fused_ordering(3) 00:14:51.170 fused_ordering(4) 00:14:51.170 fused_ordering(5) 00:14:51.170 fused_ordering(6) 00:14:51.170 fused_ordering(7) 00:14:51.170 fused_ordering(8) 00:14:51.170 fused_ordering(9) 00:14:51.170 fused_ordering(10) 00:14:51.170 fused_ordering(11) 00:14:51.170 fused_ordering(12) 00:14:51.170 fused_ordering(13) 00:14:51.170 fused_ordering(14) 00:14:51.170 fused_ordering(15) 00:14:51.170 fused_ordering(16) 00:14:51.170 fused_ordering(17) 00:14:51.170 fused_ordering(18) 00:14:51.170 fused_ordering(19) 00:14:51.170 fused_ordering(20) 00:14:51.170 fused_ordering(21) 00:14:51.170 fused_ordering(22) 00:14:51.170 fused_ordering(23) 00:14:51.170 fused_ordering(24) 00:14:51.170 fused_ordering(25) 00:14:51.170 fused_ordering(26) 00:14:51.170 fused_ordering(27) 00:14:51.170 fused_ordering(28) 00:14:51.170 fused_ordering(29) 00:14:51.170 fused_ordering(30) 00:14:51.170 fused_ordering(31) 00:14:51.170 fused_ordering(32) 00:14:51.170 fused_ordering(33) 00:14:51.170 fused_ordering(34) 00:14:51.170 fused_ordering(35) 00:14:51.170 fused_ordering(36) 00:14:51.170 fused_ordering(37) 00:14:51.170 fused_ordering(38) 00:14:51.170 fused_ordering(39) 00:14:51.170 fused_ordering(40) 00:14:51.170 fused_ordering(41) 00:14:51.170 fused_ordering(42) 00:14:51.170 fused_ordering(43) 00:14:51.170 fused_ordering(44) 00:14:51.170 fused_ordering(45) 00:14:51.170 fused_ordering(46) 00:14:51.170 fused_ordering(47) 00:14:51.170 fused_ordering(48) 00:14:51.170 fused_ordering(49) 00:14:51.170 fused_ordering(50) 00:14:51.170 fused_ordering(51) 00:14:51.170 fused_ordering(52) 00:14:51.170 fused_ordering(53) 00:14:51.170 fused_ordering(54) 00:14:51.170 fused_ordering(55) 00:14:51.170 fused_ordering(56) 00:14:51.170 fused_ordering(57) 00:14:51.170 fused_ordering(58) 00:14:51.170 fused_ordering(59) 00:14:51.170 fused_ordering(60) 00:14:51.170 fused_ordering(61) 00:14:51.170 fused_ordering(62) 00:14:51.170 fused_ordering(63) 00:14:51.170 fused_ordering(64) 00:14:51.170 fused_ordering(65) 00:14:51.170 fused_ordering(66) 00:14:51.170 fused_ordering(67) 00:14:51.170 fused_ordering(68) 00:14:51.170 fused_ordering(69) 00:14:51.170 fused_ordering(70) 00:14:51.170 fused_ordering(71) 00:14:51.170 fused_ordering(72) 00:14:51.170 fused_ordering(73) 00:14:51.170 fused_ordering(74) 00:14:51.170 fused_ordering(75) 00:14:51.170 fused_ordering(76) 00:14:51.170 fused_ordering(77) 00:14:51.170 fused_ordering(78) 00:14:51.170 
fused_ordering(79) 00:14:51.170 fused_ordering(80) 00:14:51.170 fused_ordering(81) 00:14:51.170 fused_ordering(82) 00:14:51.170 fused_ordering(83) 00:14:51.170 fused_ordering(84) 00:14:51.170 fused_ordering(85) 00:14:51.170 fused_ordering(86) 00:14:51.170 fused_ordering(87) 00:14:51.170 fused_ordering(88) 00:14:51.170 fused_ordering(89) 00:14:51.170 fused_ordering(90) 00:14:51.170 fused_ordering(91) 00:14:51.170 fused_ordering(92) 00:14:51.170 fused_ordering(93) 00:14:51.170 fused_ordering(94) 00:14:51.170 fused_ordering(95) 00:14:51.170 fused_ordering(96) 00:14:51.170 fused_ordering(97) 00:14:51.170 fused_ordering(98) 00:14:51.170 fused_ordering(99) 00:14:51.170 fused_ordering(100) 00:14:51.170 fused_ordering(101) 00:14:51.170 fused_ordering(102) 00:14:51.170 fused_ordering(103) 00:14:51.170 fused_ordering(104) 00:14:51.170 fused_ordering(105) 00:14:51.170 fused_ordering(106) 00:14:51.170 fused_ordering(107) 00:14:51.170 fused_ordering(108) 00:14:51.170 fused_ordering(109) 00:14:51.170 fused_ordering(110) 00:14:51.170 fused_ordering(111) 00:14:51.170 fused_ordering(112) 00:14:51.170 fused_ordering(113) 00:14:51.170 fused_ordering(114) 00:14:51.170 fused_ordering(115) 00:14:51.170 fused_ordering(116) 00:14:51.170 fused_ordering(117) 00:14:51.170 fused_ordering(118) 00:14:51.170 fused_ordering(119) 00:14:51.170 fused_ordering(120) 00:14:51.170 fused_ordering(121) 00:14:51.170 fused_ordering(122) 00:14:51.170 fused_ordering(123) 00:14:51.170 fused_ordering(124) 00:14:51.170 fused_ordering(125) 00:14:51.170 fused_ordering(126) 00:14:51.170 fused_ordering(127) 00:14:51.170 fused_ordering(128) 00:14:51.170 fused_ordering(129) 00:14:51.170 fused_ordering(130) 00:14:51.170 fused_ordering(131) 00:14:51.170 fused_ordering(132) 00:14:51.170 fused_ordering(133) 00:14:51.170 fused_ordering(134) 00:14:51.170 fused_ordering(135) 00:14:51.170 fused_ordering(136) 00:14:51.170 fused_ordering(137) 00:14:51.170 fused_ordering(138) 00:14:51.170 fused_ordering(139) 00:14:51.170 fused_ordering(140) 00:14:51.170 fused_ordering(141) 00:14:51.170 fused_ordering(142) 00:14:51.170 fused_ordering(143) 00:14:51.171 fused_ordering(144) 00:14:51.171 fused_ordering(145) 00:14:51.171 fused_ordering(146) 00:14:51.171 fused_ordering(147) 00:14:51.171 fused_ordering(148) 00:14:51.171 fused_ordering(149) 00:14:51.171 fused_ordering(150) 00:14:51.171 fused_ordering(151) 00:14:51.171 fused_ordering(152) 00:14:51.171 fused_ordering(153) 00:14:51.171 fused_ordering(154) 00:14:51.171 fused_ordering(155) 00:14:51.171 fused_ordering(156) 00:14:51.171 fused_ordering(157) 00:14:51.171 fused_ordering(158) 00:14:51.171 fused_ordering(159) 00:14:51.171 fused_ordering(160) 00:14:51.171 fused_ordering(161) 00:14:51.171 fused_ordering(162) 00:14:51.171 fused_ordering(163) 00:14:51.171 fused_ordering(164) 00:14:51.171 fused_ordering(165) 00:14:51.171 fused_ordering(166) 00:14:51.171 fused_ordering(167) 00:14:51.171 fused_ordering(168) 00:14:51.171 fused_ordering(169) 00:14:51.171 fused_ordering(170) 00:14:51.171 fused_ordering(171) 00:14:51.171 fused_ordering(172) 00:14:51.171 fused_ordering(173) 00:14:51.171 fused_ordering(174) 00:14:51.171 fused_ordering(175) 00:14:51.171 fused_ordering(176) 00:14:51.171 fused_ordering(177) 00:14:51.171 fused_ordering(178) 00:14:51.171 fused_ordering(179) 00:14:51.171 fused_ordering(180) 00:14:51.171 fused_ordering(181) 00:14:51.171 fused_ordering(182) 00:14:51.171 fused_ordering(183) 00:14:51.171 fused_ordering(184) 00:14:51.171 fused_ordering(185) 00:14:51.171 fused_ordering(186) 00:14:51.171 
fused_ordering(187) 00:14:51.171 fused_ordering(188) 00:14:51.171 fused_ordering(189) 00:14:51.171 fused_ordering(190) 00:14:51.171 fused_ordering(191) 00:14:51.171 fused_ordering(192) 00:14:51.171 fused_ordering(193) 00:14:51.171 fused_ordering(194) 00:14:51.171 fused_ordering(195) 00:14:51.171 fused_ordering(196) 00:14:51.171 fused_ordering(197) 00:14:51.171 fused_ordering(198) 00:14:51.171 fused_ordering(199) 00:14:51.171 fused_ordering(200) 00:14:51.171 fused_ordering(201) 00:14:51.171 fused_ordering(202) 00:14:51.171 fused_ordering(203) 00:14:51.171 fused_ordering(204) 00:14:51.171 fused_ordering(205) 00:14:51.429 fused_ordering(206) 00:14:51.429 fused_ordering(207) 00:14:51.429 fused_ordering(208) 00:14:51.429 fused_ordering(209) 00:14:51.429 fused_ordering(210) 00:14:51.429 fused_ordering(211) 00:14:51.429 fused_ordering(212) 00:14:51.429 fused_ordering(213) 00:14:51.429 fused_ordering(214) 00:14:51.429 fused_ordering(215) 00:14:51.429 fused_ordering(216) 00:14:51.429 fused_ordering(217) 00:14:51.429 fused_ordering(218) 00:14:51.429 fused_ordering(219) 00:14:51.429 fused_ordering(220) 00:14:51.429 fused_ordering(221) 00:14:51.429 fused_ordering(222) 00:14:51.429 fused_ordering(223) 00:14:51.429 fused_ordering(224) 00:14:51.429 fused_ordering(225) 00:14:51.429 fused_ordering(226) 00:14:51.429 fused_ordering(227) 00:14:51.429 fused_ordering(228) 00:14:51.429 fused_ordering(229) 00:14:51.429 fused_ordering(230) 00:14:51.429 fused_ordering(231) 00:14:51.429 fused_ordering(232) 00:14:51.429 fused_ordering(233) 00:14:51.429 fused_ordering(234) 00:14:51.429 fused_ordering(235) 00:14:51.429 fused_ordering(236) 00:14:51.429 fused_ordering(237) 00:14:51.429 fused_ordering(238) 00:14:51.429 fused_ordering(239) 00:14:51.429 fused_ordering(240) 00:14:51.429 fused_ordering(241) 00:14:51.429 fused_ordering(242) 00:14:51.429 fused_ordering(243) 00:14:51.429 fused_ordering(244) 00:14:51.429 fused_ordering(245) 00:14:51.429 fused_ordering(246) 00:14:51.429 fused_ordering(247) 00:14:51.429 fused_ordering(248) 00:14:51.429 fused_ordering(249) 00:14:51.429 fused_ordering(250) 00:14:51.429 fused_ordering(251) 00:14:51.429 fused_ordering(252) 00:14:51.429 fused_ordering(253) 00:14:51.429 fused_ordering(254) 00:14:51.429 fused_ordering(255) 00:14:51.429 fused_ordering(256) 00:14:51.429 fused_ordering(257) 00:14:51.429 fused_ordering(258) 00:14:51.429 fused_ordering(259) 00:14:51.429 fused_ordering(260) 00:14:51.429 fused_ordering(261) 00:14:51.429 fused_ordering(262) 00:14:51.429 fused_ordering(263) 00:14:51.429 fused_ordering(264) 00:14:51.429 fused_ordering(265) 00:14:51.429 fused_ordering(266) 00:14:51.429 fused_ordering(267) 00:14:51.430 fused_ordering(268) 00:14:51.430 fused_ordering(269) 00:14:51.430 fused_ordering(270) 00:14:51.430 fused_ordering(271) 00:14:51.430 fused_ordering(272) 00:14:51.430 fused_ordering(273) 00:14:51.430 fused_ordering(274) 00:14:51.430 fused_ordering(275) 00:14:51.430 fused_ordering(276) 00:14:51.430 fused_ordering(277) 00:14:51.430 fused_ordering(278) 00:14:51.430 fused_ordering(279) 00:14:51.430 fused_ordering(280) 00:14:51.430 fused_ordering(281) 00:14:51.430 fused_ordering(282) 00:14:51.430 fused_ordering(283) 00:14:51.430 fused_ordering(284) 00:14:51.430 fused_ordering(285) 00:14:51.430 fused_ordering(286) 00:14:51.430 fused_ordering(287) 00:14:51.430 fused_ordering(288) 00:14:51.430 fused_ordering(289) 00:14:51.430 fused_ordering(290) 00:14:51.430 fused_ordering(291) 00:14:51.430 fused_ordering(292) 00:14:51.430 fused_ordering(293) 00:14:51.430 fused_ordering(294) 
00:14:51.430 fused_ordering(295) 00:14:51.430 fused_ordering(296) 00:14:51.430 fused_ordering(297) 00:14:51.430 fused_ordering(298) 00:14:51.430 fused_ordering(299) 00:14:51.430 fused_ordering(300) 00:14:51.430 fused_ordering(301) 00:14:51.430 fused_ordering(302) 00:14:51.430 fused_ordering(303) 00:14:51.430 fused_ordering(304) 00:14:51.430 fused_ordering(305) 00:14:51.430 fused_ordering(306) 00:14:51.430 fused_ordering(307) 00:14:51.430 fused_ordering(308) 00:14:51.430 fused_ordering(309) 00:14:51.430 fused_ordering(310) 00:14:51.430 fused_ordering(311) 00:14:51.430 fused_ordering(312) 00:14:51.430 fused_ordering(313) 00:14:51.430 fused_ordering(314) 00:14:51.430 fused_ordering(315) 00:14:51.430 fused_ordering(316) 00:14:51.430 fused_ordering(317) 00:14:51.430 fused_ordering(318) 00:14:51.430 fused_ordering(319) 00:14:51.430 fused_ordering(320) 00:14:51.430 fused_ordering(321) 00:14:51.430 fused_ordering(322) 00:14:51.430 fused_ordering(323) 00:14:51.430 fused_ordering(324) 00:14:51.430 fused_ordering(325) 00:14:51.430 fused_ordering(326) 00:14:51.430 fused_ordering(327) 00:14:51.430 fused_ordering(328) 00:14:51.430 fused_ordering(329) 00:14:51.430 fused_ordering(330) 00:14:51.430 fused_ordering(331) 00:14:51.430 fused_ordering(332) 00:14:51.430 fused_ordering(333) 00:14:51.430 fused_ordering(334) 00:14:51.430 fused_ordering(335) 00:14:51.430 fused_ordering(336) 00:14:51.430 fused_ordering(337) 00:14:51.430 fused_ordering(338) 00:14:51.430 fused_ordering(339) 00:14:51.430 fused_ordering(340) 00:14:51.430 fused_ordering(341) 00:14:51.430 fused_ordering(342) 00:14:51.430 fused_ordering(343) 00:14:51.430 fused_ordering(344) 00:14:51.430 fused_ordering(345) 00:14:51.430 fused_ordering(346) 00:14:51.430 fused_ordering(347) 00:14:51.430 fused_ordering(348) 00:14:51.430 fused_ordering(349) 00:14:51.430 fused_ordering(350) 00:14:51.430 fused_ordering(351) 00:14:51.430 fused_ordering(352) 00:14:51.430 fused_ordering(353) 00:14:51.430 fused_ordering(354) 00:14:51.430 fused_ordering(355) 00:14:51.430 fused_ordering(356) 00:14:51.430 fused_ordering(357) 00:14:51.430 fused_ordering(358) 00:14:51.430 fused_ordering(359) 00:14:51.430 fused_ordering(360) 00:14:51.430 fused_ordering(361) 00:14:51.430 fused_ordering(362) 00:14:51.430 fused_ordering(363) 00:14:51.430 fused_ordering(364) 00:14:51.430 fused_ordering(365) 00:14:51.430 fused_ordering(366) 00:14:51.430 fused_ordering(367) 00:14:51.430 fused_ordering(368) 00:14:51.430 fused_ordering(369) 00:14:51.430 fused_ordering(370) 00:14:51.430 fused_ordering(371) 00:14:51.430 fused_ordering(372) 00:14:51.430 fused_ordering(373) 00:14:51.430 fused_ordering(374) 00:14:51.430 fused_ordering(375) 00:14:51.430 fused_ordering(376) 00:14:51.430 fused_ordering(377) 00:14:51.430 fused_ordering(378) 00:14:51.430 fused_ordering(379) 00:14:51.430 fused_ordering(380) 00:14:51.430 fused_ordering(381) 00:14:51.430 fused_ordering(382) 00:14:51.430 fused_ordering(383) 00:14:51.430 fused_ordering(384) 00:14:51.430 fused_ordering(385) 00:14:51.430 fused_ordering(386) 00:14:51.430 fused_ordering(387) 00:14:51.430 fused_ordering(388) 00:14:51.430 fused_ordering(389) 00:14:51.430 fused_ordering(390) 00:14:51.430 fused_ordering(391) 00:14:51.430 fused_ordering(392) 00:14:51.430 fused_ordering(393) 00:14:51.430 fused_ordering(394) 00:14:51.430 fused_ordering(395) 00:14:51.430 fused_ordering(396) 00:14:51.430 fused_ordering(397) 00:14:51.430 fused_ordering(398) 00:14:51.430 fused_ordering(399) 00:14:51.430 fused_ordering(400) 00:14:51.430 fused_ordering(401) 00:14:51.430 
fused_ordering(402) 00:14:51.430 fused_ordering(403) 00:14:51.430 fused_ordering(404) 00:14:51.430 fused_ordering(405) 00:14:51.430 fused_ordering(406) 00:14:51.430 fused_ordering(407) 00:14:51.430 fused_ordering(408) 00:14:51.430 fused_ordering(409) 00:14:51.430 fused_ordering(410) 00:14:51.689 fused_ordering(411) 00:14:51.689 fused_ordering(412) 00:14:51.689 fused_ordering(413) 00:14:51.689 fused_ordering(414) 00:14:51.689 fused_ordering(415) 00:14:51.689 fused_ordering(416) 00:14:51.689 fused_ordering(417) 00:14:51.689 fused_ordering(418) 00:14:51.689 fused_ordering(419) 00:14:51.689 fused_ordering(420) 00:14:51.689 fused_ordering(421) 00:14:51.689 fused_ordering(422) 00:14:51.689 fused_ordering(423) 00:14:51.689 fused_ordering(424) 00:14:51.689 fused_ordering(425) 00:14:51.689 fused_ordering(426) 00:14:51.689 fused_ordering(427) 00:14:51.689 fused_ordering(428) 00:14:51.689 fused_ordering(429) 00:14:51.689 fused_ordering(430) 00:14:51.689 fused_ordering(431) 00:14:51.689 fused_ordering(432) 00:14:51.689 fused_ordering(433) 00:14:51.689 fused_ordering(434) 00:14:51.689 fused_ordering(435) 00:14:51.689 fused_ordering(436) 00:14:51.689 fused_ordering(437) 00:14:51.689 fused_ordering(438) 00:14:51.689 fused_ordering(439) 00:14:51.689 fused_ordering(440) 00:14:51.689 fused_ordering(441) 00:14:51.689 fused_ordering(442) 00:14:51.689 fused_ordering(443) 00:14:51.689 fused_ordering(444) 00:14:51.689 fused_ordering(445) 00:14:51.689 fused_ordering(446) 00:14:51.689 fused_ordering(447) 00:14:51.689 fused_ordering(448) 00:14:51.689 fused_ordering(449) 00:14:51.689 fused_ordering(450) 00:14:51.689 fused_ordering(451) 00:14:51.689 fused_ordering(452) 00:14:51.689 fused_ordering(453) 00:14:51.689 fused_ordering(454) 00:14:51.689 fused_ordering(455) 00:14:51.689 fused_ordering(456) 00:14:51.689 fused_ordering(457) 00:14:51.689 fused_ordering(458) 00:14:51.689 fused_ordering(459) 00:14:51.689 fused_ordering(460) 00:14:51.689 fused_ordering(461) 00:14:51.689 fused_ordering(462) 00:14:51.689 fused_ordering(463) 00:14:51.689 fused_ordering(464) 00:14:51.689 fused_ordering(465) 00:14:51.689 fused_ordering(466) 00:14:51.689 fused_ordering(467) 00:14:51.689 fused_ordering(468) 00:14:51.689 fused_ordering(469) 00:14:51.689 fused_ordering(470) 00:14:51.689 fused_ordering(471) 00:14:51.689 fused_ordering(472) 00:14:51.689 fused_ordering(473) 00:14:51.689 fused_ordering(474) 00:14:51.689 fused_ordering(475) 00:14:51.689 fused_ordering(476) 00:14:51.689 fused_ordering(477) 00:14:51.689 fused_ordering(478) 00:14:51.689 fused_ordering(479) 00:14:51.689 fused_ordering(480) 00:14:51.689 fused_ordering(481) 00:14:51.689 fused_ordering(482) 00:14:51.689 fused_ordering(483) 00:14:51.689 fused_ordering(484) 00:14:51.689 fused_ordering(485) 00:14:51.689 fused_ordering(486) 00:14:51.689 fused_ordering(487) 00:14:51.689 fused_ordering(488) 00:14:51.689 fused_ordering(489) 00:14:51.689 fused_ordering(490) 00:14:51.689 fused_ordering(491) 00:14:51.689 fused_ordering(492) 00:14:51.689 fused_ordering(493) 00:14:51.689 fused_ordering(494) 00:14:51.689 fused_ordering(495) 00:14:51.689 fused_ordering(496) 00:14:51.689 fused_ordering(497) 00:14:51.689 fused_ordering(498) 00:14:51.689 fused_ordering(499) 00:14:51.689 fused_ordering(500) 00:14:51.689 fused_ordering(501) 00:14:51.689 fused_ordering(502) 00:14:51.689 fused_ordering(503) 00:14:51.689 fused_ordering(504) 00:14:51.689 fused_ordering(505) 00:14:51.689 fused_ordering(506) 00:14:51.689 fused_ordering(507) 00:14:51.689 fused_ordering(508) 00:14:51.689 fused_ordering(509) 
00:14:51.689 fused_ordering(510) 00:14:51.689 fused_ordering(511) 00:14:51.689 fused_ordering(512) 00:14:51.689 fused_ordering(513) 00:14:51.689 fused_ordering(514) 00:14:51.689 fused_ordering(515) 00:14:51.689 fused_ordering(516) 00:14:51.689 fused_ordering(517) 00:14:51.689 fused_ordering(518) 00:14:51.689 fused_ordering(519) 00:14:51.689 fused_ordering(520) 00:14:51.689 fused_ordering(521) 00:14:51.689 fused_ordering(522) 00:14:51.689 fused_ordering(523) 00:14:51.689 fused_ordering(524) 00:14:51.689 fused_ordering(525) 00:14:51.689 fused_ordering(526) 00:14:51.689 fused_ordering(527) 00:14:51.689 fused_ordering(528) 00:14:51.689 fused_ordering(529) 00:14:51.689 fused_ordering(530) 00:14:51.689 fused_ordering(531) 00:14:51.689 fused_ordering(532) 00:14:51.689 fused_ordering(533) 00:14:51.689 fused_ordering(534) 00:14:51.689 fused_ordering(535) 00:14:51.689 fused_ordering(536) 00:14:51.689 fused_ordering(537) 00:14:51.689 fused_ordering(538) 00:14:51.689 fused_ordering(539) 00:14:51.689 fused_ordering(540) 00:14:51.689 fused_ordering(541) 00:14:51.689 fused_ordering(542) 00:14:51.689 fused_ordering(543) 00:14:51.689 fused_ordering(544) 00:14:51.689 fused_ordering(545) 00:14:51.689 fused_ordering(546) 00:14:51.689 fused_ordering(547) 00:14:51.689 fused_ordering(548) 00:14:51.689 fused_ordering(549) 00:14:51.690 fused_ordering(550) 00:14:51.690 fused_ordering(551) 00:14:51.690 fused_ordering(552) 00:14:51.690 fused_ordering(553) 00:14:51.690 fused_ordering(554) 00:14:51.690 fused_ordering(555) 00:14:51.690 fused_ordering(556) 00:14:51.690 fused_ordering(557) 00:14:51.690 fused_ordering(558) 00:14:51.690 fused_ordering(559) 00:14:51.690 fused_ordering(560) 00:14:51.690 fused_ordering(561) 00:14:51.690 fused_ordering(562) 00:14:51.690 fused_ordering(563) 00:14:51.690 fused_ordering(564) 00:14:51.690 fused_ordering(565) 00:14:51.690 fused_ordering(566) 00:14:51.690 fused_ordering(567) 00:14:51.690 fused_ordering(568) 00:14:51.690 fused_ordering(569) 00:14:51.690 fused_ordering(570) 00:14:51.690 fused_ordering(571) 00:14:51.690 fused_ordering(572) 00:14:51.690 fused_ordering(573) 00:14:51.690 fused_ordering(574) 00:14:51.690 fused_ordering(575) 00:14:51.690 fused_ordering(576) 00:14:51.690 fused_ordering(577) 00:14:51.690 fused_ordering(578) 00:14:51.690 fused_ordering(579) 00:14:51.690 fused_ordering(580) 00:14:51.690 fused_ordering(581) 00:14:51.690 fused_ordering(582) 00:14:51.690 fused_ordering(583) 00:14:51.690 fused_ordering(584) 00:14:51.690 fused_ordering(585) 00:14:51.690 fused_ordering(586) 00:14:51.690 fused_ordering(587) 00:14:51.690 fused_ordering(588) 00:14:51.690 fused_ordering(589) 00:14:51.690 fused_ordering(590) 00:14:51.690 fused_ordering(591) 00:14:51.690 fused_ordering(592) 00:14:51.690 fused_ordering(593) 00:14:51.690 fused_ordering(594) 00:14:51.690 fused_ordering(595) 00:14:51.690 fused_ordering(596) 00:14:51.690 fused_ordering(597) 00:14:51.690 fused_ordering(598) 00:14:51.690 fused_ordering(599) 00:14:51.690 fused_ordering(600) 00:14:51.690 fused_ordering(601) 00:14:51.690 fused_ordering(602) 00:14:51.690 fused_ordering(603) 00:14:51.690 fused_ordering(604) 00:14:51.690 fused_ordering(605) 00:14:51.690 fused_ordering(606) 00:14:51.690 fused_ordering(607) 00:14:51.690 fused_ordering(608) 00:14:51.690 fused_ordering(609) 00:14:51.690 fused_ordering(610) 00:14:51.690 fused_ordering(611) 00:14:51.690 fused_ordering(612) 00:14:51.690 fused_ordering(613) 00:14:51.690 fused_ordering(614) 00:14:51.690 fused_ordering(615) 00:14:52.257 fused_ordering(616) 00:14:52.257 
fused_ordering(617) 00:14:52.257 fused_ordering(618) 00:14:52.257 fused_ordering(619) 00:14:52.257 fused_ordering(620) 00:14:52.257 fused_ordering(621) 00:14:52.257 fused_ordering(622) 00:14:52.257 fused_ordering(623) 00:14:52.257 fused_ordering(624) 00:14:52.257 fused_ordering(625) 00:14:52.257 fused_ordering(626) 00:14:52.257 fused_ordering(627) 00:14:52.257 fused_ordering(628) 00:14:52.257 fused_ordering(629) 00:14:52.257 fused_ordering(630) 00:14:52.257 fused_ordering(631) 00:14:52.257 fused_ordering(632) 00:14:52.257 fused_ordering(633) 00:14:52.257 fused_ordering(634) 00:14:52.257 fused_ordering(635) 00:14:52.257 fused_ordering(636) 00:14:52.257 fused_ordering(637) 00:14:52.257 fused_ordering(638) 00:14:52.257 fused_ordering(639) 00:14:52.257 fused_ordering(640) 00:14:52.257 fused_ordering(641) 00:14:52.257 fused_ordering(642) 00:14:52.257 fused_ordering(643) 00:14:52.257 fused_ordering(644) 00:14:52.257 fused_ordering(645) 00:14:52.257 fused_ordering(646) 00:14:52.257 fused_ordering(647) 00:14:52.257 fused_ordering(648) 00:14:52.257 fused_ordering(649) 00:14:52.257 fused_ordering(650) 00:14:52.257 fused_ordering(651) 00:14:52.257 fused_ordering(652) 00:14:52.257 fused_ordering(653) 00:14:52.257 fused_ordering(654) 00:14:52.257 fused_ordering(655) 00:14:52.257 fused_ordering(656) 00:14:52.257 fused_ordering(657) 00:14:52.257 fused_ordering(658) 00:14:52.257 fused_ordering(659) 00:14:52.257 fused_ordering(660) 00:14:52.257 fused_ordering(661) 00:14:52.257 fused_ordering(662) 00:14:52.257 fused_ordering(663) 00:14:52.257 fused_ordering(664) 00:14:52.257 fused_ordering(665) 00:14:52.257 fused_ordering(666) 00:14:52.257 fused_ordering(667) 00:14:52.257 fused_ordering(668) 00:14:52.257 fused_ordering(669) 00:14:52.257 fused_ordering(670) 00:14:52.257 fused_ordering(671) 00:14:52.258 fused_ordering(672) 00:14:52.258 fused_ordering(673) 00:14:52.258 fused_ordering(674) 00:14:52.258 fused_ordering(675) 00:14:52.258 fused_ordering(676) 00:14:52.258 fused_ordering(677) 00:14:52.258 fused_ordering(678) 00:14:52.258 fused_ordering(679) 00:14:52.258 fused_ordering(680) 00:14:52.258 fused_ordering(681) 00:14:52.258 fused_ordering(682) 00:14:52.258 fused_ordering(683) 00:14:52.258 fused_ordering(684) 00:14:52.258 fused_ordering(685) 00:14:52.258 fused_ordering(686) 00:14:52.258 fused_ordering(687) 00:14:52.258 fused_ordering(688) 00:14:52.258 fused_ordering(689) 00:14:52.258 fused_ordering(690) 00:14:52.258 fused_ordering(691) 00:14:52.258 fused_ordering(692) 00:14:52.258 fused_ordering(693) 00:14:52.258 fused_ordering(694) 00:14:52.258 fused_ordering(695) 00:14:52.258 fused_ordering(696) 00:14:52.258 fused_ordering(697) 00:14:52.258 fused_ordering(698) 00:14:52.258 fused_ordering(699) 00:14:52.258 fused_ordering(700) 00:14:52.258 fused_ordering(701) 00:14:52.258 fused_ordering(702) 00:14:52.258 fused_ordering(703) 00:14:52.258 fused_ordering(704) 00:14:52.258 fused_ordering(705) 00:14:52.258 fused_ordering(706) 00:14:52.258 fused_ordering(707) 00:14:52.258 fused_ordering(708) 00:14:52.258 fused_ordering(709) 00:14:52.258 fused_ordering(710) 00:14:52.258 fused_ordering(711) 00:14:52.258 fused_ordering(712) 00:14:52.258 fused_ordering(713) 00:14:52.258 fused_ordering(714) 00:14:52.258 fused_ordering(715) 00:14:52.258 fused_ordering(716) 00:14:52.258 fused_ordering(717) 00:14:52.258 fused_ordering(718) 00:14:52.258 fused_ordering(719) 00:14:52.258 fused_ordering(720) 00:14:52.258 fused_ordering(721) 00:14:52.258 fused_ordering(722) 00:14:52.258 fused_ordering(723) 00:14:52.258 fused_ordering(724) 
00:14:52.258 fused_ordering(725) 00:14:52.258 fused_ordering(726) 00:14:52.258 fused_ordering(727) 00:14:52.258 fused_ordering(728) 00:14:52.258 fused_ordering(729) 00:14:52.258 fused_ordering(730) 00:14:52.258 fused_ordering(731) 00:14:52.258 fused_ordering(732) 00:14:52.258 fused_ordering(733) 00:14:52.258 fused_ordering(734) 00:14:52.258 fused_ordering(735) 00:14:52.258 fused_ordering(736) 00:14:52.258 fused_ordering(737) 00:14:52.258 fused_ordering(738) 00:14:52.258 fused_ordering(739) 00:14:52.258 fused_ordering(740) 00:14:52.258 fused_ordering(741) 00:14:52.258 fused_ordering(742) 00:14:52.258 fused_ordering(743) 00:14:52.258 fused_ordering(744) 00:14:52.258 fused_ordering(745) 00:14:52.258 fused_ordering(746) 00:14:52.258 fused_ordering(747) 00:14:52.258 fused_ordering(748) 00:14:52.258 fused_ordering(749) 00:14:52.258 fused_ordering(750) 00:14:52.258 fused_ordering(751) 00:14:52.258 fused_ordering(752) 00:14:52.258 fused_ordering(753) 00:14:52.258 fused_ordering(754) 00:14:52.258 fused_ordering(755) 00:14:52.258 fused_ordering(756) 00:14:52.258 fused_ordering(757) 00:14:52.258 fused_ordering(758) 00:14:52.258 fused_ordering(759) 00:14:52.258 fused_ordering(760) 00:14:52.258 fused_ordering(761) 00:14:52.258 fused_ordering(762) 00:14:52.258 fused_ordering(763) 00:14:52.258 fused_ordering(764) 00:14:52.258 fused_ordering(765) 00:14:52.258 fused_ordering(766) 00:14:52.258 fused_ordering(767) 00:14:52.258 fused_ordering(768) 00:14:52.258 fused_ordering(769) 00:14:52.258 fused_ordering(770) 00:14:52.258 fused_ordering(771) 00:14:52.258 fused_ordering(772) 00:14:52.258 fused_ordering(773) 00:14:52.258 fused_ordering(774) 00:14:52.258 fused_ordering(775) 00:14:52.258 fused_ordering(776) 00:14:52.258 fused_ordering(777) 00:14:52.258 fused_ordering(778) 00:14:52.258 fused_ordering(779) 00:14:52.258 fused_ordering(780) 00:14:52.258 fused_ordering(781) 00:14:52.258 fused_ordering(782) 00:14:52.258 fused_ordering(783) 00:14:52.258 fused_ordering(784) 00:14:52.258 fused_ordering(785) 00:14:52.258 fused_ordering(786) 00:14:52.258 fused_ordering(787) 00:14:52.258 fused_ordering(788) 00:14:52.258 fused_ordering(789) 00:14:52.258 fused_ordering(790) 00:14:52.258 fused_ordering(791) 00:14:52.258 fused_ordering(792) 00:14:52.258 fused_ordering(793) 00:14:52.258 fused_ordering(794) 00:14:52.258 fused_ordering(795) 00:14:52.258 fused_ordering(796) 00:14:52.258 fused_ordering(797) 00:14:52.258 fused_ordering(798) 00:14:52.258 fused_ordering(799) 00:14:52.258 fused_ordering(800) 00:14:52.258 fused_ordering(801) 00:14:52.258 fused_ordering(802) 00:14:52.258 fused_ordering(803) 00:14:52.258 fused_ordering(804) 00:14:52.258 fused_ordering(805) 00:14:52.258 fused_ordering(806) 00:14:52.258 fused_ordering(807) 00:14:52.258 fused_ordering(808) 00:14:52.258 fused_ordering(809) 00:14:52.258 fused_ordering(810) 00:14:52.258 fused_ordering(811) 00:14:52.258 fused_ordering(812) 00:14:52.258 fused_ordering(813) 00:14:52.258 fused_ordering(814) 00:14:52.258 fused_ordering(815) 00:14:52.258 fused_ordering(816) 00:14:52.258 fused_ordering(817) 00:14:52.258 fused_ordering(818) 00:14:52.258 fused_ordering(819) 00:14:52.258 fused_ordering(820) 00:14:52.826 fused_ordering(821) 00:14:52.826 fused_ordering(822) 00:14:52.826 fused_ordering(823) 00:14:52.826 fused_ordering(824) 00:14:52.826 fused_ordering(825) 00:14:52.826 fused_ordering(826) 00:14:52.826 fused_ordering(827) 00:14:52.826 fused_ordering(828) 00:14:52.826 fused_ordering(829) 00:14:52.826 fused_ordering(830) 00:14:52.826 fused_ordering(831) 00:14:52.826 
fused_ordering(832) 00:14:52.826 fused_ordering(833) 00:14:52.826 fused_ordering(834) 00:14:52.826 fused_ordering(835) 00:14:52.826 fused_ordering(836) 00:14:52.826 fused_ordering(837) 00:14:52.826 fused_ordering(838) 00:14:52.826 fused_ordering(839) 00:14:52.826 fused_ordering(840) 00:14:52.826 fused_ordering(841) 00:14:52.826 fused_ordering(842) 00:14:52.826 fused_ordering(843) 00:14:52.826 fused_ordering(844) 00:14:52.826 fused_ordering(845) 00:14:52.826 fused_ordering(846) 00:14:52.826 fused_ordering(847) 00:14:52.826 fused_ordering(848) 00:14:52.826 fused_ordering(849) 00:14:52.826 fused_ordering(850) 00:14:52.826 fused_ordering(851) 00:14:52.826 fused_ordering(852) 00:14:52.826 fused_ordering(853) 00:14:52.826 fused_ordering(854) 00:14:52.826 fused_ordering(855) 00:14:52.826 fused_ordering(856) 00:14:52.826 fused_ordering(857) 00:14:52.826 fused_ordering(858) 00:14:52.826 fused_ordering(859) 00:14:52.826 fused_ordering(860) 00:14:52.826 fused_ordering(861) 00:14:52.826 fused_ordering(862) 00:14:52.826 fused_ordering(863) 00:14:52.826 fused_ordering(864) 00:14:52.826 fused_ordering(865) 00:14:52.826 fused_ordering(866) 00:14:52.826 fused_ordering(867) 00:14:52.826 fused_ordering(868) 00:14:52.826 fused_ordering(869) 00:14:52.826 fused_ordering(870) 00:14:52.826 fused_ordering(871) 00:14:52.826 fused_ordering(872) 00:14:52.826 fused_ordering(873) 00:14:52.826 fused_ordering(874) 00:14:52.826 fused_ordering(875) 00:14:52.826 fused_ordering(876) 00:14:52.826 fused_ordering(877) 00:14:52.826 fused_ordering(878) 00:14:52.826 fused_ordering(879) 00:14:52.826 fused_ordering(880) 00:14:52.826 fused_ordering(881) 00:14:52.826 fused_ordering(882) 00:14:52.826 fused_ordering(883) 00:14:52.826 fused_ordering(884) 00:14:52.826 fused_ordering(885) 00:14:52.826 fused_ordering(886) 00:14:52.826 fused_ordering(887) 00:14:52.826 fused_ordering(888) 00:14:52.826 fused_ordering(889) 00:14:52.826 fused_ordering(890) 00:14:52.826 fused_ordering(891) 00:14:52.826 fused_ordering(892) 00:14:52.826 fused_ordering(893) 00:14:52.826 fused_ordering(894) 00:14:52.826 fused_ordering(895) 00:14:52.826 fused_ordering(896) 00:14:52.826 fused_ordering(897) 00:14:52.826 fused_ordering(898) 00:14:52.826 fused_ordering(899) 00:14:52.826 fused_ordering(900) 00:14:52.826 fused_ordering(901) 00:14:52.826 fused_ordering(902) 00:14:52.826 fused_ordering(903) 00:14:52.826 fused_ordering(904) 00:14:52.826 fused_ordering(905) 00:14:52.826 fused_ordering(906) 00:14:52.826 fused_ordering(907) 00:14:52.826 fused_ordering(908) 00:14:52.826 fused_ordering(909) 00:14:52.826 fused_ordering(910) 00:14:52.826 fused_ordering(911) 00:14:52.826 fused_ordering(912) 00:14:52.826 fused_ordering(913) 00:14:52.826 fused_ordering(914) 00:14:52.826 fused_ordering(915) 00:14:52.826 fused_ordering(916) 00:14:52.826 fused_ordering(917) 00:14:52.826 fused_ordering(918) 00:14:52.826 fused_ordering(919) 00:14:52.826 fused_ordering(920) 00:14:52.826 fused_ordering(921) 00:14:52.826 fused_ordering(922) 00:14:52.826 fused_ordering(923) 00:14:52.826 fused_ordering(924) 00:14:52.826 fused_ordering(925) 00:14:52.826 fused_ordering(926) 00:14:52.826 fused_ordering(927) 00:14:52.826 fused_ordering(928) 00:14:52.826 fused_ordering(929) 00:14:52.826 fused_ordering(930) 00:14:52.826 fused_ordering(931) 00:14:52.826 fused_ordering(932) 00:14:52.826 fused_ordering(933) 00:14:52.826 fused_ordering(934) 00:14:52.826 fused_ordering(935) 00:14:52.826 fused_ordering(936) 00:14:52.826 fused_ordering(937) 00:14:52.826 fused_ordering(938) 00:14:52.826 fused_ordering(939) 
00:14:52.826 fused_ordering(940) 00:14:52.827 fused_ordering(941) 00:14:52.827 fused_ordering(942) 00:14:52.827 fused_ordering(943) 00:14:52.827 fused_ordering(944) 00:14:52.827 fused_ordering(945) 00:14:52.827 fused_ordering(946) 00:14:52.827 fused_ordering(947) 00:14:52.827 fused_ordering(948) 00:14:52.827 fused_ordering(949) 00:14:52.827 fused_ordering(950) 00:14:52.827 fused_ordering(951) 00:14:52.827 fused_ordering(952) 00:14:52.827 fused_ordering(953) 00:14:52.827 fused_ordering(954) 00:14:52.827 fused_ordering(955) 00:14:52.827 fused_ordering(956) 00:14:52.827 fused_ordering(957) 00:14:52.827 fused_ordering(958) 00:14:52.827 fused_ordering(959) 00:14:52.827 fused_ordering(960) 00:14:52.827 fused_ordering(961) 00:14:52.827 fused_ordering(962) 00:14:52.827 fused_ordering(963) 00:14:52.827 fused_ordering(964) 00:14:52.827 fused_ordering(965) 00:14:52.827 fused_ordering(966) 00:14:52.827 fused_ordering(967) 00:14:52.827 fused_ordering(968) 00:14:52.827 fused_ordering(969) 00:14:52.827 fused_ordering(970) 00:14:52.827 fused_ordering(971) 00:14:52.827 fused_ordering(972) 00:14:52.827 fused_ordering(973) 00:14:52.827 fused_ordering(974) 00:14:52.827 fused_ordering(975) 00:14:52.827 fused_ordering(976) 00:14:52.827 fused_ordering(977) 00:14:52.827 fused_ordering(978) 00:14:52.827 fused_ordering(979) 00:14:52.827 fused_ordering(980) 00:14:52.827 fused_ordering(981) 00:14:52.827 fused_ordering(982) 00:14:52.827 fused_ordering(983) 00:14:52.827 fused_ordering(984) 00:14:52.827 fused_ordering(985) 00:14:52.827 fused_ordering(986) 00:14:52.827 fused_ordering(987) 00:14:52.827 fused_ordering(988) 00:14:52.827 fused_ordering(989) 00:14:52.827 fused_ordering(990) 00:14:52.827 fused_ordering(991) 00:14:52.827 fused_ordering(992) 00:14:52.827 fused_ordering(993) 00:14:52.827 fused_ordering(994) 00:14:52.827 fused_ordering(995) 00:14:52.827 fused_ordering(996) 00:14:52.827 fused_ordering(997) 00:14:52.827 fused_ordering(998) 00:14:52.827 fused_ordering(999) 00:14:52.827 fused_ordering(1000) 00:14:52.827 fused_ordering(1001) 00:14:52.827 fused_ordering(1002) 00:14:52.827 fused_ordering(1003) 00:14:52.827 fused_ordering(1004) 00:14:52.827 fused_ordering(1005) 00:14:52.827 fused_ordering(1006) 00:14:52.827 fused_ordering(1007) 00:14:52.827 fused_ordering(1008) 00:14:52.827 fused_ordering(1009) 00:14:52.827 fused_ordering(1010) 00:14:52.827 fused_ordering(1011) 00:14:52.827 fused_ordering(1012) 00:14:52.827 fused_ordering(1013) 00:14:52.827 fused_ordering(1014) 00:14:52.827 fused_ordering(1015) 00:14:52.827 fused_ordering(1016) 00:14:52.827 fused_ordering(1017) 00:14:52.827 fused_ordering(1018) 00:14:52.827 fused_ordering(1019) 00:14:52.827 fused_ordering(1020) 00:14:52.827 fused_ordering(1021) 00:14:52.827 fused_ordering(1022) 00:14:52.827 fused_ordering(1023) 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:14:52.827 rmmod nvme_tcp 00:14:52.827 rmmod nvme_fabrics 00:14:52.827 rmmod nvme_keyring 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1067071 ']' 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1067071 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1067071 ']' 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1067071 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1067071 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1067071' 00:14:52.827 killing process with pid 1067071 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1067071 00:14:52.827 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1067071 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.086 12:04:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.991 12:04:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:54.991 00:14:54.991 real 0m10.540s 00:14:54.991 user 0m4.913s 00:14:54.991 sys 0m5.849s 00:14:54.991 12:04:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:54.991 12:04:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:54.991 ************************************ 00:14:54.991 END TEST nvmf_fused_ordering 00:14:54.991 ************************************ 00:14:54.991 12:04:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:54.991 12:04:44 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:54.991 12:04:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:54.991 12:04:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
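The rmmod/killprocess lines above are the shared nvmftestfini teardown for the fused_ordering suite: the kernel nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the nvmf target started for this test (pid 1067071) is killed and waited on, and the initiator-side interface address is flushed before the next suite starts. A minimal manual equivalent is sketched below; the PID and the cvl_0_1 interface name are specific to this run and would differ on another host.

  # teardown sketch mirroring the nvmftestfini steps logged above (run-specific values, adjust for your host)
  sudo modprobe -v -r nvme-tcp       # also drops nvme_fabrics/nvme_keyring, as the rmmod lines show
  sudo modprobe -v -r nvme-fabrics
  kill 1067071                       # nvmfpid from this run
  wait 1067071 2>/dev/null || true   # 'wait' only applies if the target was launched from this shell
  sudo ip -4 addr flush cvl_0_1      # initiator-side interface used on this CI host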
00:14:54.991 12:04:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.250 ************************************ 00:14:55.250 START TEST nvmf_delete_subsystem 00:14:55.250 ************************************ 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:55.250 * Looking for test storage... 00:14:55.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.250 12:04:45 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.250 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.251 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.251 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.251 12:04:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:01.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:01.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.868 
12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:01.868 Found net devices under 0000:86:00.0: cvl_0_0 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:01.868 Found net devices under 0000:86:00.1: cvl_0_1 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.868 12:04:50 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:15:01.868 00:15:01.868 --- 10.0.0.2 ping statistics --- 00:15:01.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.868 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:15:01.868 00:15:01.868 --- 10.0.0.1 ping statistics --- 00:15:01.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.868 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.868 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1070909 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1070909 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1070909 ']' 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.869 12:04:50 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.869 12:04:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 [2024-07-15 12:04:50.967087] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:15:01.869 [2024-07-15 12:04:50.967133] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.869 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.869 [2024-07-15 12:04:51.040188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:01.869 [2024-07-15 12:04:51.081267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.869 [2024-07-15 12:04:51.081305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.869 [2024-07-15 12:04:51.081313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.869 [2024-07-15 12:04:51.081319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.869 [2024-07-15 12:04:51.081326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
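At this point nvmfappstart has launched the target inside the cvl_0_0_ns_spdk namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waitforlisten polls, up to max_retries=100, until the app answers on /var/tmp/spdk.sock. A rough stand-alone equivalent of that launch-and-wait step is sketched below; the polling loop is a simplified stand-in for the harness's waitforlisten helper and uses rpc_get_methods purely as a liveness probe.

  # sketch: start nvmf_tgt in the target netns and wait for its RPC socket (simplified waitforlisten)
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  for i in $(seq 1 100); do
      # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
      sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done

Once the socket is up, the test issues the nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener, bdev_null_create and bdev_delay_create RPCs traced in the following lines.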
00:15:01.869 [2024-07-15 12:04:51.081382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.869 [2024-07-15 12:04:51.081383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 [2024-07-15 12:04:51.211099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 [2024-07-15 12:04:51.231271] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 NULL1 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 Delay0 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1071079 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:01.869 12:04:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:01.869 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.869 [2024-07-15 12:04:51.322003] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:03.815 12:04:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.815 12:04:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.815 12:04:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Write completed with error (sct=0, sc=8) 00:15:03.815 starting I/O failed: -6 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 starting I/O failed: -6 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.815 Write completed with error (sct=0, sc=8) 00:15:03.815 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 
00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 [2024-07-15 12:04:53.566121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dd5f0 is same with the state(5) to be set 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed 
with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 [2024-07-15 12:04:53.567336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dd230 is same with the state(5) to be set 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 [2024-07-15 12:04:53.570941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f826c00d450 is same with the state(5) to be set 00:15:03.816 starting I/O failed: -6 00:15:03.816 
starting I/O failed: -6 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:03.816 Write completed with error (sct=0, sc=8) 00:15:03.816 Read completed with error (sct=0, sc=8) 00:15:04.747 [2024-07-15 12:04:54.544120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18db330 is same with the state(5) to be set 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 
00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 [2024-07-15 12:04:54.569322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dd050 is same with the state(5) to be set 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 [2024-07-15 12:04:54.569682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dd410 is same with the state(5) to be set 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read 
completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 [2024-07-15 12:04:54.573070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f826c00cfe0 is same with the state(5) to be set 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.747 Write completed with error (sct=0, sc=8) 00:15:04.747 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Write completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Write completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Write completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Write completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Read completed with error (sct=0, sc=8) 00:15:04.748 Write completed with error (sct=0, sc=8) 00:15:04.748 [2024-07-15 12:04:54.573474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f826c00d760 is same with the state(5) to be set 00:15:04.748 Initializing NVMe Controllers 00:15:04.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.748 Controller IO queue size 128, less than required. 00:15:04.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:04.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:04.748 Initialization complete. Launching workers. 
00:15:04.748 ======================================================== 00:15:04.748 Latency(us) 00:15:04.748 Device Information : IOPS MiB/s Average min max 00:15:04.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.89 0.08 892082.67 542.07 1006297.54 00:15:04.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.41 0.08 925918.77 233.52 2002534.60 00:15:04.748 ======================================================== 00:15:04.748 Total : 336.30 0.16 908725.02 233.52 2002534.60 00:15:04.748 00:15:04.748 [2024-07-15 12:04:54.574001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18db330 (9): Bad file descriptor 00:15:04.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:04.748 12:04:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.748 12:04:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:04.748 12:04:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071079 00:15:04.748 12:04:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1071079 00:15:05.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1071079) - No such process 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1071079 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1071079 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1071079 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
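The trace up to this point is the first delete-under-load pass of delete_subsystem.sh: a TCP subsystem is backed by a null bdev wrapped in a delay bdev with very large (1,000,000-unit) injected latencies, spdk_nvme_perf is started against it, and the subsystem is deleted while that I/O is still outstanding. The long runs of "Read/Write completed with error (sct=0, sc=8)" are those in-flight commands being failed back to the initiator once their queues disappear, which is what the test wants to see instead of a hang. A condensed sketch of the same sequence, using rpc.py in place of the harness's rpc_cmd wrapper, abbreviating the build/bin path, and backgrounding perf by hand:

  # Target side: subsystem on 10.0.0.2:4420 backed by a deliberately slow bdev.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Initiator side: start I/O, then pull the subsystem out from under it.
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1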
00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.314 [2024-07-15 12:04:55.099488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1071627 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:05.314 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.314 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.314 [2024-07-15 12:04:55.163530] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
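The repeated kill -0 / sleep 0.5 lines that follow are only the harness waiting for the second perf run (pid 1071627 here) to go away; kill -0 sends no signal, it just tests whether the PID still exists. A sketch of that wait loop, assuming the perf PID was captured as in the previous snippet (the exact failure handling in delete_subsystem.sh differs):

  # Wait for spdk_nvme_perf to exit, checking every 0.5 s, roughly a 10 s budget.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo 'perf still running, giving up' >&2; break; }
      sleep 0.5
  done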
00:15:05.879 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.879 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:05.879 12:04:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.138 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.138 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:06.138 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.704 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.704 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:06.704 12:04:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.270 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.270 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:07.270 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.836 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.836 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:07.836 12:04:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.401 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.401 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:08.401 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.659 Initializing NVMe Controllers 00:15:08.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.659 Controller IO queue size 128, less than required. 00:15:08.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:08.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:08.659 Initialization complete. Launching workers. 
00:15:08.659 ======================================================== 00:15:08.659 Latency(us) 00:15:08.659 Device Information : IOPS MiB/s Average min max 00:15:08.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002030.14 1000144.57 1005428.22 00:15:08.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004217.26 1000300.55 1042570.30 00:15:08.659 ======================================================== 00:15:08.659 Total : 256.00 0.12 1003123.70 1000144.57 1042570.30 00:15:08.659 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1071627 00:15:08.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1071627) - No such process 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1071627 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.659 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.659 rmmod nvme_tcp 00:15:08.918 rmmod nvme_fabrics 00:15:08.918 rmmod nvme_keyring 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1070909 ']' 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1070909 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1070909 ']' 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1070909 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1070909 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1070909' 00:15:08.918 killing process with pid 1070909 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1070909 00:15:08.918 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1070909 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.178 12:04:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.086 12:05:00 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:11.086 00:15:11.086 real 0m15.979s 00:15:11.086 user 0m29.441s 00:15:11.086 sys 0m5.298s 00:15:11.086 12:05:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.086 12:05:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:11.087 ************************************ 00:15:11.087 END TEST nvmf_delete_subsystem 00:15:11.087 ************************************ 00:15:11.087 12:05:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:11.087 12:05:01 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:11.087 12:05:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.087 12:05:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:11.087 12:05:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.087 ************************************ 00:15:11.087 START TEST nvmf_ns_masking 00:15:11.087 ************************************ 00:15:11.087 12:05:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:11.346 * Looking for test storage... 
00:15:11.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.346 12:05:01 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bd7e67e2-7c57-4449-97df-8102ecf4b1df 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1160329e-76a5-47b9-8feb-1be7cc947cae 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dd5b5168-43e0-4516-bff8-59d3087009cd 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.347 12:05:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.917 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:17.918 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:17.918 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.918 
12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:17.918 Found net devices under 0000:86:00.0: cvl_0_0 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:17.918 Found net devices under 0000:86:00.1: cvl_0_1 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:17.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:15:17.918 00:15:17.918 --- 10.0.0.2 ping statistics --- 00:15:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.918 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:15:17.918 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:15:17.918 00:15:17.919 --- 10.0.0.1 ping statistics --- 00:15:17.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.919 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1075777 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1075777 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1075777 ']' 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.919 12:05:06 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.919 12:05:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.919 [2024-07-15 12:05:07.021163] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:15:17.919 [2024-07-15 12:05:07.021204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.919 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.919 [2024-07-15 12:05:07.091274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.919 [2024-07-15 12:05:07.131139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.919 [2024-07-15 12:05:07.131177] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.919 [2024-07-15 12:05:07.131184] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.919 [2024-07-15 12:05:07.131190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.919 [2024-07-15 12:05:07.131195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.919 [2024-07-15 12:05:07.131213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:17.919 [2024-07-15 12:05:07.411134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:17.919 Malloc1 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:17.919 Malloc2 00:15:17.919 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
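The ns_masking target setup beginning here is deliberately small: one TCP subsystem and two 64 MiB malloc bdevs. Malloc1 goes in as namespace 1 right away, Malloc2 follows as namespace 2 once the first host connection is verified, and further down namespace 1 is re-added with --no-auto-visible so per-host masking can be exercised. Condensed from the rpc.py calls in the trace (values taken from the lines above and below):

  # Target side for the ns_masking test.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1            # 64 MiB, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420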
00:15:18.178 12:05:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:18.178 12:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.437 [2024-07-15 12:05:08.324368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.437 12:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:18.437 12:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd5b5168-43e0-4516-bff8-59d3087009cd -a 10.0.0.2 -s 4420 -i 4 00:15:18.695 12:05:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:18.695 12:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:18.695 12:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:18.695 12:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:18.695 12:05:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:20.600 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.859 [ 0]:0x1 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d7cbca20f874c3eaea80f323a54627a 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d7cbca20f874c3eaea80f323a54627a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
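The ns_is_visible checks above hinge on the NGUID reported by Identify Namespace: a visible namespace reports its real NGUID, while a masked one reports all zeros. A rough shell equivalent of that helper, assuming the controller enumerates as /dev/nvme0 as it does in this run:

# Check whether namespace <nsid> (e.g. 0x1) is exposed to this host on /dev/nvme0
ns_is_visible() {
    local nsid=$1
    # List active namespaces on the controller (prints e.g. "[ 0]:0x1")
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # A masked namespace identifies with an all-zero NGUID
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# usage: ns_is_visible 0x1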
00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:20.859 [ 0]:0x1 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:20.859 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.118 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d7cbca20f874c3eaea80f323a54627a 00:15:21.118 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d7cbca20f874c3eaea80f323a54627a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:21.119 [ 1]:0x2 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:21.119 12:05:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.378 12:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.647 12:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:21.647 12:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:21.647 12:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd5b5168-43e0-4516-bff8-59d3087009cd -a 10.0.0.2 -s 4420 -i 4 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:21.937 12:05:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:23.842 12:05:13 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:23.842 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.101 [ 0]:0x2 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.101 12:05:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.364 [ 0]:0x1 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d7cbca20f874c3eaea80f323a54627a 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d7cbca20f874c3eaea80f323a54627a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.364 [ 1]:0x2 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.364 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:24.623 [ 0]:0x2 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.623 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dd5b5168-43e0-4516-bff8-59d3087009cd -a 10.0.0.2 -s 4420 -i 4 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:24.882 12:05:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
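The masking steps exercised above come down to three RPCs: attach the namespace with --no-auto-visible so no host sees it by default, then grant and revoke access per host NQN. Condensed, with the same subsystem and host NQNs used in this run:

# Attach Malloc1 as NSID 1, hidden from all hosts by default
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# Grant nqn.2016-06.io.spdk:host1 access to NSID 1; that host now sees the namespace
scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Revoke it again; Identify Namespace on that host goes back to an all-zero NGUID
scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1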
00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:27.417 12:05:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:27.417 [ 0]:0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9d7cbca20f874c3eaea80f323a54627a 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9d7cbca20f874c3eaea80f323a54627a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:27.417 [ 1]:0x2 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.417 12:05:17 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:27.676 [ 0]:0x2 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:27.676 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:27.676 [2024-07-15 12:05:17.659075] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:27.676 request: 00:15:27.676 { 00:15:27.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.676 "nsid": 2, 00:15:27.676 "host": "nqn.2016-06.io.spdk:host1", 00:15:27.676 "method": "nvmf_ns_remove_host", 00:15:27.676 "req_id": 1 00:15:27.676 } 00:15:27.676 Got JSON-RPC error response 00:15:27.676 response: 00:15:27.676 { 00:15:27.676 "code": -32602, 00:15:27.676 "message": "Invalid parameters" 00:15:27.676 } 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:27.935 [ 0]:0x2 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9008a80fbc4442799f9646d6725c3e30 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
9008a80fbc4442799f9646d6725c3e30 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:27.935 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1077764 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1077764 /var/tmp/host.sock 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1077764 ']' 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:28.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.195 12:05:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:28.195 [2024-07-15 12:05:18.020957] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:15:28.195 [2024-07-15 12:05:18.021001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077764 ] 00:15:28.195 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.195 [2024-07-15 12:05:18.086536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.195 [2024-07-15 12:05:18.126709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.454 12:05:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.454 12:05:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:15:28.455 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:28.713 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:28.713 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bd7e67e2-7c57-4449-97df-8102ecf4b1df 00:15:28.713 12:05:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:28.713 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BD7E67E27C57444997DF8102ECF4B1DF -i 00:15:28.971 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1160329e-76a5-47b9-8feb-1be7cc947cae 00:15:28.971 12:05:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:28.971 12:05:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1160329E76A547B98FEB1BE7CC947CAE -i 00:15:29.229 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.487 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:29.487 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:29.487 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:29.745 nvme0n1 00:15:29.745 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:29.745 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:15:30.002 nvme1n2 00:15:30.002 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:30.002 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:30.002 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:30.002 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:30.002 12:05:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:30.260 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:30.260 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:30.260 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:30.260 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:30.518 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bd7e67e2-7c57-4449-97df-8102ecf4b1df == \b\d\7\e\6\7\e\2\-\7\c\5\7\-\4\4\4\9\-\9\7\d\f\-\8\1\0\2\e\c\f\4\b\1\d\f ]] 00:15:30.518 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:30.518 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:30.518 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1160329e-76a5-47b9-8feb-1be7cc947cae == \1\1\6\0\3\2\9\e\-\7\6\a\5\-\4\7\b\9\-\8\f\e\b\-\1\b\e\7\c\c\9\4\7\c\a\e ]] 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1077764 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1077764 ']' 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1077764 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077764 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077764' 00:15:30.775 killing process with pid 1077764 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1077764 00:15:30.775 12:05:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1077764 00:15:31.033 12:05:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:31.301 12:05:21 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.301 rmmod nvme_tcp 00:15:31.301 rmmod nvme_fabrics 00:15:31.301 rmmod nvme_keyring 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1075777 ']' 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1075777 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1075777 ']' 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1075777 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1075777 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1075777' 00:15:31.301 killing process with pid 1075777 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1075777 00:15:31.301 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1075777 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.562 12:05:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.094 12:05:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:34.094 00:15:34.094 real 0m22.418s 00:15:34.094 user 0m23.312s 00:15:34.094 sys 0m6.361s 00:15:34.094 12:05:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.094 12:05:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:34.094 ************************************ 00:15:34.094 END TEST nvmf_ns_masking 00:15:34.094 ************************************ 00:15:34.094 12:05:23 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:15:34.094 12:05:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:34.094 12:05:23 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:34.094 12:05:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.094 12:05:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.094 12:05:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.094 ************************************ 00:15:34.094 START TEST nvmf_nvme_cli 00:15:34.094 ************************************ 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:34.094 * Looking for test storage... 00:15:34.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.094 12:05:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.095 12:05:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:39.369 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:39.369 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:39.369 Found net devices under 0000:86:00.0: cvl_0_0 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:39.369 Found net devices under 0000:86:00.1: cvl_0_1 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.369 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.628 12:05:29 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:39.628 00:15:39.628 --- 10.0.0.2 ping statistics --- 00:15:39.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.628 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:15:39.628 00:15:39.628 --- 10.0.0.1 ping statistics --- 00:15:39.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.628 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1081782 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1081782 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1081782 ']' 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.628 12:05:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:39.628 [2024-07-15 12:05:29.513706] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
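The namespace plumbing traced above isolates the target side of the link before the NVMe/TCP target is launched: one port of the NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, TCP port 4420 is opened in the firewall, and the cross-namespace pings confirm the path. A minimal sketch of the same topology with plain iproute2, using the placeholder interface names eth_tgt and eth_ini instead of the real port names from this run (those names and the namespace name are assumptions for illustration):

  # create an isolated namespace for the NVMe/TCP target side
  ip netns add tgt_ns
  # move the target-side port into it and address both ends of the link
  ip link set eth_tgt netns tgt_ns
  ip addr add 10.0.0.1/24 dev eth_ini
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec tgt_ns ip link set eth_tgt up
  ip netns exec tgt_ns ip link set lo up
  # allow the NVMe/TCP listener port through the initiator-side firewall
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec tgt_ns ping -c 1 10.0.0.1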
00:15:39.628 [2024-07-15 12:05:29.513752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.628 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.628 [2024-07-15 12:05:29.587753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.887 [2024-07-15 12:05:29.629963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.887 [2024-07-15 12:05:29.630006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.887 [2024-07-15 12:05:29.630013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.887 [2024-07-15 12:05:29.630019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.887 [2024-07-15 12:05:29.630025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.887 [2024-07-15 12:05:29.630083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.887 [2024-07-15 12:05:29.630126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.887 [2024-07-15 12:05:29.630208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.887 [2024-07-15 12:05:29.630209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 [2024-07-15 12:05:30.366191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 Malloc0 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 Malloc1 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.453 [2024-07-15 12:05:30.447475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.453 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:15:40.710 00:15:40.710 Discovery Log Number of Records 2, Generation counter 2 00:15:40.710 =====Discovery Log Entry 0====== 00:15:40.710 trtype: tcp 00:15:40.710 adrfam: ipv4 00:15:40.710 subtype: current discovery subsystem 00:15:40.710 treq: not required 00:15:40.710 portid: 0 00:15:40.710 trsvcid: 4420 00:15:40.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:40.710 traddr: 10.0.0.2 00:15:40.710 eflags: explicit discovery connections, duplicate discovery information 00:15:40.710 sectype: none 00:15:40.710 =====Discovery Log Entry 1====== 00:15:40.710 trtype: tcp 00:15:40.710 adrfam: ipv4 00:15:40.710 subtype: nvme subsystem 00:15:40.710 treq: not required 00:15:40.710 portid: 0 00:15:40.710 trsvcid: 4420 00:15:40.710 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:40.710 traddr: 10.0.0.2 00:15:40.710 eflags: none 00:15:40.710 sectype: none 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:40.710 12:05:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:42.113 12:05:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:44.015 12:05:33 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:44.015 /dev/nvme0n1 ]] 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.015 12:05:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:44.272 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.273 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:44.273 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:44.273 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:44.273 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:44.273 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.531 rmmod nvme_tcp 00:15:44.531 rmmod nvme_fabrics 00:15:44.531 rmmod nvme_keyring 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1081782 ']' 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1081782 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1081782 ']' 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1081782 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1081782 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1081782' 00:15:44.531 killing process with pid 1081782 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1081782 00:15:44.531 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1081782 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.790 12:05:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.324 12:05:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.324 00:15:47.324 real 0m13.185s 00:15:47.324 user 0m21.686s 00:15:47.324 sys 0m4.965s 00:15:47.324 12:05:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.324 12:05:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:47.325 ************************************ 00:15:47.325 END TEST nvmf_nvme_cli 00:15:47.325 ************************************ 00:15:47.325 12:05:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:47.325 12:05:36 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:47.325 12:05:36 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:47.325 12:05:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.325 12:05:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.325 12:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.325 ************************************ 00:15:47.325 START TEST nvmf_vfio_user 00:15:47.325 ************************************ 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:47.325 * Looking for test storage... 00:15:47.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:47.325 
12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1083071 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1083071' 00:15:47.325 Process pid: 1083071 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1083071 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1083071 ']' 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.325 12:05:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:47.325 [2024-07-15 12:05:36.974918] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:15:47.325 [2024-07-15 12:05:36.974969] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.325 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.325 [2024-07-15 12:05:37.042304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.325 [2024-07-15 12:05:37.083696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.325 [2024-07-15 12:05:37.083734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.325 [2024-07-15 12:05:37.083741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.325 [2024-07-15 12:05:37.083747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.325 [2024-07-15 12:05:37.083753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
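For the vfio-user run the target is started directly on the host (no network namespace is required, since the transport is a local UNIX-socket path rather than TCP), pinned to cores 0-3, and the harness blocks in waitforlisten until the application's RPC socket answers. A rough sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock RPC socket and an SPDK_DIR variable pointing at a built SPDK tree (both are assumptions, not values taken from this log):

  # start the target on cores 0-3 with all trace groups enabled
  "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # wait until the app is up and its RPC socket responds
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is ready"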
00:15:47.325 [2024-07-15 12:05:37.083809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.325 [2024-07-15 12:05:37.083917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.325 [2024-07-15 12:05:37.084027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.325 [2024-07-15 12:05:37.084028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.325 12:05:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.325 12:05:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:47.325 12:05:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:48.282 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:48.541 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:48.541 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:48.541 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.541 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:48.541 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.800 Malloc1 00:15:48.800 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:48.800 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:49.058 12:05:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:49.317 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.317 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:49.317 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:49.576 Malloc2 00:15:49.576 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.576 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:49.835 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:50.095 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:50.095 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:50.095 12:05:39 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:50.095 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:50.095 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:50.095 12:05:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:50.095 [2024-07-15 12:05:39.903954] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:15:50.095 [2024-07-15 12:05:39.903992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083563 ] 00:15:50.095 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.095 [2024-07-15 12:05:39.933732] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:50.095 [2024-07-15 12:05:39.943512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.095 [2024-07-15 12:05:39.943533] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc7628df000 00:15:50.095 [2024-07-15 12:05:39.944513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.945514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.946517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.947515] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.948521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.949531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.950539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.951542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.095 [2024-07-15 12:05:39.952551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.095 [2024-07-15 12:05:39.952564] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc7616a5000 00:15:50.095 [2024-07-15 12:05:39.953505] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.095 [2024-07-15 12:05:39.966108] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:50.095 [2024-07-15 12:05:39.966130] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:50.095 [2024-07-15 12:05:39.968651] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:50.095 [2024-07-15 12:05:39.968686] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:50.095 [2024-07-15 12:05:39.968753] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:50.095 [2024-07-15 12:05:39.968770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:50.095 [2024-07-15 12:05:39.968775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:50.095 [2024-07-15 12:05:39.969647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:50.095 [2024-07-15 12:05:39.969657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:50.095 [2024-07-15 12:05:39.969664] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:50.095 [2024-07-15 12:05:39.970651] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:50.095 [2024-07-15 12:05:39.970660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:50.095 [2024-07-15 12:05:39.970667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.971659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:50.095 [2024-07-15 12:05:39.971667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.972662] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:50.095 [2024-07-15 12:05:39.972670] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:50.095 [2024-07-15 12:05:39.972674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.972680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.972784] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:50.095 [2024-07-15 12:05:39.972789] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.972795] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:50.095 [2024-07-15 12:05:39.973665] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:50.095 [2024-07-15 12:05:39.974672] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:50.095 [2024-07-15 12:05:39.975681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:50.095 [2024-07-15 12:05:39.976681] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.095 [2024-07-15 12:05:39.976741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:50.095 [2024-07-15 12:05:39.977692] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:50.095 [2024-07-15 12:05:39.977699] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:50.095 [2024-07-15 12:05:39.977703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:50.095 [2024-07-15 12:05:39.977720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:50.095 [2024-07-15 12:05:39.977730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:50.095 [2024-07-15 12:05:39.977743] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.095 [2024-07-15 12:05:39.977748] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.095 [2024-07-15 12:05:39.977760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.095 [2024-07-15 12:05:39.977798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:50.095 [2024-07-15 12:05:39.977806] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:50.095 [2024-07-15 12:05:39.977813] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:50.095 [2024-07-15 12:05:39.977817] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:50.095 [2024-07-15 12:05:39.977821] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:50.095 [2024-07-15 12:05:39.977825] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:50.095 [2024-07-15 12:05:39.977828] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:50.095 [2024-07-15 12:05:39.977832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:50.095 [2024-07-15 12:05:39.977839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:50.095 [2024-07-15 12:05:39.977847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:50.095 [2024-07-15 12:05:39.977859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:50.095 [2024-07-15 12:05:39.977870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.095 [2024-07-15 12:05:39.977878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.095 [2024-07-15 12:05:39.977889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.095 [2024-07-15 12:05:39.977896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.095 [2024-07-15 12:05:39.977900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:50.095 [2024-07-15 12:05:39.977908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.977916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.977925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.977930] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:50.096 [2024-07-15 12:05:39.977934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.977940] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.977945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.977953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.977961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978009] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978022] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:50.096 [2024-07-15 12:05:39.978025] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:50.096 [2024-07-15 12:05:39.978031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978055] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:50.096 [2024-07-15 12:05:39.978062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978074] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.096 [2024-07-15 12:05:39.978078] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.096 [2024-07-15 12:05:39.978084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978130] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.096 [2024-07-15 12:05:39.978134] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.096 [2024-07-15 12:05:39.978140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
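The state transitions traced here are the standard NVMe controller bring-up: read VS and CAP, confirm CC.EN and CSTS.RDY are both 0, write CC.EN = 1, poll CSTS.RDY until it reads 1, then walk through Identify Controller, AER configuration, keep-alive, queue-count negotiation, and the per-namespace Identify commands. The vfio-user transport changes only how those register reads and writes reach the device (over the socket under /var/run/vfio-user and the mapped BAR regions shown above); the sequence itself is the same one any SPDK initiator runs. A hedged sketch of pointing SPDK example apps at such an endpoint, with the socket path and subsystem NQN taken from this run; the perf binary location and flags are quoted from memory of the SPDK examples and may differ by version:

  # identify the vfio-user controller (same invocation shape as in the log above)
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # the same transport ID string can be handed to the perf example (path/flags assumed)
  ./build/examples/perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -q 32 -o 4096 -w randread -t 10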
00:15:50.096 [2024-07-15 12:05:39.978169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978188] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:50.096 [2024-07-15 12:05:39.978192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:50.096 [2024-07-15 12:05:39.978196] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:50.096 [2024-07-15 12:05:39.978212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978237] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978297] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:50.096 [2024-07-15 12:05:39.978301] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:50.096 [2024-07-15 12:05:39.978304] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:50.096 [2024-07-15 12:05:39.978309] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:50.096 [2024-07-15 12:05:39.978314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:50.096 [2024-07-15 12:05:39.978321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:50.096 
[2024-07-15 12:05:39.978324] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:50.096 [2024-07-15 12:05:39.978330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978335] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:50.096 [2024-07-15 12:05:39.978339] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.096 [2024-07-15 12:05:39.978344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978351] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:50.096 [2024-07-15 12:05:39.978354] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:50.096 [2024-07-15 12:05:39.978360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:50.096 [2024-07-15 12:05:39.978366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:50.096 [2024-07-15 12:05:39.978392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:50.096 ===================================================== 00:15:50.096 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.096 ===================================================== 00:15:50.096 Controller Capabilities/Features 00:15:50.096 ================================ 00:15:50.096 Vendor ID: 4e58 00:15:50.096 Subsystem Vendor ID: 4e58 00:15:50.096 Serial Number: SPDK1 00:15:50.096 Model Number: SPDK bdev Controller 00:15:50.096 Firmware Version: 24.09 00:15:50.096 Recommended Arb Burst: 6 00:15:50.096 IEEE OUI Identifier: 8d 6b 50 00:15:50.096 Multi-path I/O 00:15:50.096 May have multiple subsystem ports: Yes 00:15:50.096 May have multiple controllers: Yes 00:15:50.096 Associated with SR-IOV VF: No 00:15:50.096 Max Data Transfer Size: 131072 00:15:50.096 Max Number of Namespaces: 32 00:15:50.096 Max Number of I/O Queues: 127 00:15:50.096 NVMe Specification Version (VS): 1.3 00:15:50.096 NVMe Specification Version (Identify): 1.3 00:15:50.096 Maximum Queue Entries: 256 00:15:50.096 Contiguous Queues Required: Yes 00:15:50.096 Arbitration Mechanisms Supported 00:15:50.096 Weighted Round Robin: Not Supported 00:15:50.096 Vendor Specific: Not Supported 00:15:50.096 Reset Timeout: 15000 ms 00:15:50.096 Doorbell Stride: 4 bytes 00:15:50.096 NVM Subsystem Reset: Not Supported 00:15:50.096 Command Sets Supported 00:15:50.096 NVM Command Set: Supported 00:15:50.096 Boot Partition: Not Supported 00:15:50.096 Memory Page Size Minimum: 4096 bytes 00:15:50.096 Memory Page Size Maximum: 4096 bytes 00:15:50.096 Persistent Memory Region: Not Supported 
00:15:50.096 Optional Asynchronous Events Supported 00:15:50.096 Namespace Attribute Notices: Supported 00:15:50.096 Firmware Activation Notices: Not Supported 00:15:50.096 ANA Change Notices: Not Supported 00:15:50.096 PLE Aggregate Log Change Notices: Not Supported 00:15:50.096 LBA Status Info Alert Notices: Not Supported 00:15:50.096 EGE Aggregate Log Change Notices: Not Supported 00:15:50.096 Normal NVM Subsystem Shutdown event: Not Supported 00:15:50.096 Zone Descriptor Change Notices: Not Supported 00:15:50.096 Discovery Log Change Notices: Not Supported 00:15:50.096 Controller Attributes 00:15:50.096 128-bit Host Identifier: Supported 00:15:50.096 Non-Operational Permissive Mode: Not Supported 00:15:50.096 NVM Sets: Not Supported 00:15:50.096 Read Recovery Levels: Not Supported 00:15:50.096 Endurance Groups: Not Supported 00:15:50.096 Predictable Latency Mode: Not Supported 00:15:50.096 Traffic Based Keep ALive: Not Supported 00:15:50.096 Namespace Granularity: Not Supported 00:15:50.096 SQ Associations: Not Supported 00:15:50.096 UUID List: Not Supported 00:15:50.096 Multi-Domain Subsystem: Not Supported 00:15:50.097 Fixed Capacity Management: Not Supported 00:15:50.097 Variable Capacity Management: Not Supported 00:15:50.097 Delete Endurance Group: Not Supported 00:15:50.097 Delete NVM Set: Not Supported 00:15:50.097 Extended LBA Formats Supported: Not Supported 00:15:50.097 Flexible Data Placement Supported: Not Supported 00:15:50.097 00:15:50.097 Controller Memory Buffer Support 00:15:50.097 ================================ 00:15:50.097 Supported: No 00:15:50.097 00:15:50.097 Persistent Memory Region Support 00:15:50.097 ================================ 00:15:50.097 Supported: No 00:15:50.097 00:15:50.097 Admin Command Set Attributes 00:15:50.097 ============================ 00:15:50.097 Security Send/Receive: Not Supported 00:15:50.097 Format NVM: Not Supported 00:15:50.097 Firmware Activate/Download: Not Supported 00:15:50.097 Namespace Management: Not Supported 00:15:50.097 Device Self-Test: Not Supported 00:15:50.097 Directives: Not Supported 00:15:50.097 NVMe-MI: Not Supported 00:15:50.097 Virtualization Management: Not Supported 00:15:50.097 Doorbell Buffer Config: Not Supported 00:15:50.097 Get LBA Status Capability: Not Supported 00:15:50.097 Command & Feature Lockdown Capability: Not Supported 00:15:50.097 Abort Command Limit: 4 00:15:50.097 Async Event Request Limit: 4 00:15:50.097 Number of Firmware Slots: N/A 00:15:50.097 Firmware Slot 1 Read-Only: N/A 00:15:50.097 Firmware Activation Without Reset: N/A 00:15:50.097 Multiple Update Detection Support: N/A 00:15:50.097 Firmware Update Granularity: No Information Provided 00:15:50.097 Per-Namespace SMART Log: No 00:15:50.097 Asymmetric Namespace Access Log Page: Not Supported 00:15:50.097 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:50.097 Command Effects Log Page: Supported 00:15:50.097 Get Log Page Extended Data: Supported 00:15:50.097 Telemetry Log Pages: Not Supported 00:15:50.097 Persistent Event Log Pages: Not Supported 00:15:50.097 Supported Log Pages Log Page: May Support 00:15:50.097 Commands Supported & Effects Log Page: Not Supported 00:15:50.097 Feature Identifiers & Effects Log Page:May Support 00:15:50.097 NVMe-MI Commands & Effects Log Page: May Support 00:15:50.097 Data Area 4 for Telemetry Log: Not Supported 00:15:50.097 Error Log Page Entries Supported: 128 00:15:50.097 Keep Alive: Supported 00:15:50.097 Keep Alive Granularity: 10000 ms 00:15:50.097 00:15:50.097 NVM Command Set Attributes 
00:15:50.097 ========================== 00:15:50.097 Submission Queue Entry Size 00:15:50.097 Max: 64 00:15:50.097 Min: 64 00:15:50.097 Completion Queue Entry Size 00:15:50.097 Max: 16 00:15:50.097 Min: 16 00:15:50.097 Number of Namespaces: 32 00:15:50.097 Compare Command: Supported 00:15:50.097 Write Uncorrectable Command: Not Supported 00:15:50.097 Dataset Management Command: Supported 00:15:50.097 Write Zeroes Command: Supported 00:15:50.097 Set Features Save Field: Not Supported 00:15:50.097 Reservations: Not Supported 00:15:50.097 Timestamp: Not Supported 00:15:50.097 Copy: Supported 00:15:50.097 Volatile Write Cache: Present 00:15:50.097 Atomic Write Unit (Normal): 1 00:15:50.097 Atomic Write Unit (PFail): 1 00:15:50.097 Atomic Compare & Write Unit: 1 00:15:50.097 Fused Compare & Write: Supported 00:15:50.097 Scatter-Gather List 00:15:50.097 SGL Command Set: Supported (Dword aligned) 00:15:50.097 SGL Keyed: Not Supported 00:15:50.097 SGL Bit Bucket Descriptor: Not Supported 00:15:50.097 SGL Metadata Pointer: Not Supported 00:15:50.097 Oversized SGL: Not Supported 00:15:50.097 SGL Metadata Address: Not Supported 00:15:50.097 SGL Offset: Not Supported 00:15:50.097 Transport SGL Data Block: Not Supported 00:15:50.097 Replay Protected Memory Block: Not Supported 00:15:50.097 00:15:50.097 Firmware Slot Information 00:15:50.097 ========================= 00:15:50.097 Active slot: 1 00:15:50.097 Slot 1 Firmware Revision: 24.09 00:15:50.097 00:15:50.097 00:15:50.097 Commands Supported and Effects 00:15:50.097 ============================== 00:15:50.097 Admin Commands 00:15:50.097 -------------- 00:15:50.097 Get Log Page (02h): Supported 00:15:50.097 Identify (06h): Supported 00:15:50.097 Abort (08h): Supported 00:15:50.097 Set Features (09h): Supported 00:15:50.097 Get Features (0Ah): Supported 00:15:50.097 Asynchronous Event Request (0Ch): Supported 00:15:50.097 Keep Alive (18h): Supported 00:15:50.097 I/O Commands 00:15:50.097 ------------ 00:15:50.097 Flush (00h): Supported LBA-Change 00:15:50.097 Write (01h): Supported LBA-Change 00:15:50.097 Read (02h): Supported 00:15:50.097 Compare (05h): Supported 00:15:50.097 Write Zeroes (08h): Supported LBA-Change 00:15:50.097 Dataset Management (09h): Supported LBA-Change 00:15:50.097 Copy (19h): Supported LBA-Change 00:15:50.097 00:15:50.097 Error Log 00:15:50.097 ========= 00:15:50.097 00:15:50.097 Arbitration 00:15:50.097 =========== 00:15:50.097 Arbitration Burst: 1 00:15:50.097 00:15:50.097 Power Management 00:15:50.097 ================ 00:15:50.097 Number of Power States: 1 00:15:50.097 Current Power State: Power State #0 00:15:50.097 Power State #0: 00:15:50.097 Max Power: 0.00 W 00:15:50.097 Non-Operational State: Operational 00:15:50.097 Entry Latency: Not Reported 00:15:50.097 Exit Latency: Not Reported 00:15:50.097 Relative Read Throughput: 0 00:15:50.097 Relative Read Latency: 0 00:15:50.097 Relative Write Throughput: 0 00:15:50.097 Relative Write Latency: 0 00:15:50.097 Idle Power: Not Reported 00:15:50.097 Active Power: Not Reported 00:15:50.097 Non-Operational Permissive Mode: Not Supported 00:15:50.097 00:15:50.097 Health Information 00:15:50.097 ================== 00:15:50.097 Critical Warnings: 00:15:50.097 Available Spare Space: OK 00:15:50.097 Temperature: OK 00:15:50.097 Device Reliability: OK 00:15:50.097 Read Only: No 00:15:50.097 Volatile Memory Backup: OK 00:15:50.097 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:50.097 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:50.097 Available Spare: 0% 00:15:50.097 
[2024-07-15 12:05:39.978479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:50.097 [2024-07-15 12:05:39.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:50.097 [2024-07-15 12:05:39.978511] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:50.097 [2024-07-15 12:05:39.978519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.097 [2024-07-15 12:05:39.978525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.097 [2024-07-15 12:05:39.978530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.097 [2024-07-15 12:05:39.978536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.097 [2024-07-15 12:05:39.981232] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:50.097 [2024-07-15 12:05:39.981243] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:50.097 [2024-07-15 12:05:39.981718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.097 [2024-07-15 12:05:39.981768] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:50.097 [2024-07-15 12:05:39.981775] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:50.097 [2024-07-15 12:05:39.982725] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:50.097 [2024-07-15 12:05:39.982735] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:50.097 [2024-07-15 12:05:39.982781] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:50.097 [2024-07-15 12:05:39.984753] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.097 Available Spare Threshold: 0% 00:15:50.097 Life Percentage Used: 0% 00:15:50.097 Data Units Read: 0 00:15:50.097 Data Units Written: 0 00:15:50.097 Host Read Commands: 0 00:15:50.097 Host Write Commands: 0 00:15:50.097 Controller Busy Time: 0 minutes 00:15:50.097 Power Cycles: 0 00:15:50.097 Power On Hours: 0 hours 00:15:50.097 Unsafe Shutdowns: 0 00:15:50.097 Unrecoverable Media Errors: 0 00:15:50.097 Lifetime Error Log Entries: 0 00:15:50.097 Warning Temperature Time: 0 minutes 00:15:50.097 Critical Temperature Time: 0 minutes 00:15:50.097 00:15:50.097 Number of Queues 00:15:50.097 ================ 00:15:50.097 Number of I/O Submission Queues: 127 00:15:50.097 Number of I/O Completion Queues: 127 00:15:50.097 00:15:50.097 Active Namespaces 00:15:50.097 ================= 00:15:50.097 Namespace ID:1 00:15:50.097 Error Recovery Timeout: Unlimited 00:15:50.097 Command 
Set Identifier: NVM (00h) 00:15:50.097 Deallocate: Supported 00:15:50.097 Deallocated/Unwritten Error: Not Supported 00:15:50.097 Deallocated Read Value: Unknown 00:15:50.097 Deallocate in Write Zeroes: Not Supported 00:15:50.097 Deallocated Guard Field: 0xFFFF 00:15:50.097 Flush: Supported 00:15:50.097 Reservation: Supported 00:15:50.097 Namespace Sharing Capabilities: Multiple Controllers 00:15:50.097 Size (in LBAs): 131072 (0GiB) 00:15:50.098 Capacity (in LBAs): 131072 (0GiB) 00:15:50.098 Utilization (in LBAs): 131072 (0GiB) 00:15:50.098 NGUID: 4E9196D1CB93437E9D7A6439F8BA57E0 00:15:50.098 UUID: 4e9196d1-cb93-437e-9d7a-6439f8ba57e0 00:15:50.098 Thin Provisioning: Not Supported 00:15:50.098 Per-NS Atomic Units: Yes 00:15:50.098 Atomic Boundary Size (Normal): 0 00:15:50.098 Atomic Boundary Size (PFail): 0 00:15:50.098 Atomic Boundary Offset: 0 00:15:50.098 Maximum Single Source Range Length: 65535 00:15:50.098 Maximum Copy Length: 65535 00:15:50.098 Maximum Source Range Count: 1 00:15:50.098 NGUID/EUI64 Never Reused: No 00:15:50.098 Namespace Write Protected: No 00:15:50.098 Number of LBA Formats: 1 00:15:50.098 Current LBA Format: LBA Format #00 00:15:50.098 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:50.098 00:15:50.098 12:05:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:50.098 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.356 [2024-07-15 12:05:40.199101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.629 Initializing NVMe Controllers 00:15:55.629 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:55.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:55.629 Initialization complete. Launching workers. 00:15:55.629 ======================================================== 00:15:55.629 Latency(us) 00:15:55.629 Device Information : IOPS MiB/s Average min max 00:15:55.629 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39929.77 155.98 3205.23 938.74 6677.97 00:15:55.629 ======================================================== 00:15:55.629 Total : 39929.77 155.98 3205.23 938.74 6677.97 00:15:55.629 00:15:55.629 [2024-07-15 12:05:45.216129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.629 12:05:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:55.629 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.629 [2024-07-15 12:05:45.443208] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.895 Initializing NVMe Controllers 00:16:00.895 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:00.895 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:00.895 Initialization complete. Launching workers. 
00:16:00.895 ======================================================== 00:16:00.895 Latency(us) 00:16:00.895 Device Information : IOPS MiB/s Average min max 00:16:00.895 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.02 62.70 7979.95 6980.23 8993.02 00:16:00.895 ======================================================== 00:16:00.895 Total : 16051.02 62.70 7979.95 6980.23 8993.02 00:16:00.895 00:16:00.895 [2024-07-15 12:05:50.487432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:00.895 12:05:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:00.895 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.895 [2024-07-15 12:05:50.685381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.228 [2024-07-15 12:05:55.749501] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.228 Initializing NVMe Controllers 00:16:06.228 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:06.228 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:06.228 Initialization complete. Launching workers. 00:16:06.228 Starting thread on core 2 00:16:06.228 Starting thread on core 3 00:16:06.228 Starting thread on core 1 00:16:06.228 12:05:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:06.228 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.228 [2024-07-15 12:05:56.028630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.535 [2024-07-15 12:05:59.093726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.535 Initializing NVMe Controllers 00:16:09.535 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.535 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.535 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:09.535 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:09.535 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:09.535 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:09.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:09.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:09.535 Initialization complete. Launching workers. 
00:16:09.535 Starting thread on core 1 with urgent priority queue 00:16:09.535 Starting thread on core 2 with urgent priority queue 00:16:09.535 Starting thread on core 3 with urgent priority queue 00:16:09.535 Starting thread on core 0 with urgent priority queue 00:16:09.535 SPDK bdev Controller (SPDK1 ) core 0: 9676.67 IO/s 10.33 secs/100000 ios 00:16:09.535 SPDK bdev Controller (SPDK1 ) core 1: 7715.67 IO/s 12.96 secs/100000 ios 00:16:09.535 SPDK bdev Controller (SPDK1 ) core 2: 7577.00 IO/s 13.20 secs/100000 ios 00:16:09.535 SPDK bdev Controller (SPDK1 ) core 3: 7011.67 IO/s 14.26 secs/100000 ios 00:16:09.535 ======================================================== 00:16:09.535 00:16:09.535 12:05:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:09.535 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.535 [2024-07-15 12:05:59.375668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.535 Initializing NVMe Controllers 00:16:09.535 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.535 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.535 Namespace ID: 1 size: 0GB 00:16:09.535 Initialization complete. 00:16:09.535 INFO: using host memory buffer for IO 00:16:09.535 Hello world! 00:16:09.535 [2024-07-15 12:05:59.408868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.535 12:05:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:09.535 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.793 [2024-07-15 12:05:59.678650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:10.729 Initializing NVMe Controllers 00:16:10.729 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.729 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.729 Initialization complete. Launching workers. 
00:16:10.729 submit (in ns) avg, min, max = 7600.4, 3215.7, 4003773.9 00:16:10.729 complete (in ns) avg, min, max = 19268.1, 1760.9, 4993720.0 00:16:10.729 00:16:10.729 Submit histogram 00:16:10.729 ================ 00:16:10.729 Range in us Cumulative Count 00:16:10.729 3.214 - 3.228: 0.0061% ( 1) 00:16:10.729 3.228 - 3.242: 0.0122% ( 1) 00:16:10.729 3.242 - 3.256: 0.0243% ( 2) 00:16:10.729 3.256 - 3.270: 0.0487% ( 4) 00:16:10.729 3.270 - 3.283: 0.1703% ( 20) 00:16:10.729 3.283 - 3.297: 1.3381% ( 192) 00:16:10.729 3.297 - 3.311: 5.2795% ( 648) 00:16:10.729 3.311 - 3.325: 10.9057% ( 925) 00:16:10.729 3.325 - 3.339: 17.0245% ( 1006) 00:16:10.729 3.339 - 3.353: 23.3927% ( 1047) 00:16:10.729 3.353 - 3.367: 29.7731% ( 1049) 00:16:10.729 3.367 - 3.381: 34.7667% ( 821) 00:16:10.729 3.381 - 3.395: 39.9854% ( 858) 00:16:10.729 3.395 - 3.409: 45.1493% ( 849) 00:16:10.729 3.409 - 3.423: 49.3948% ( 698) 00:16:10.729 3.423 - 3.437: 53.2693% ( 637) 00:16:10.729 3.437 - 3.450: 59.1692% ( 970) 00:16:10.729 3.450 - 3.464: 65.9875% ( 1121) 00:16:10.729 3.464 - 3.478: 69.9897% ( 658) 00:16:10.729 3.478 - 3.492: 75.0441% ( 831) 00:16:10.729 3.492 - 3.506: 80.0925% ( 830) 00:16:10.729 3.506 - 3.520: 83.1458% ( 502) 00:16:10.729 3.520 - 3.534: 85.0739% ( 317) 00:16:10.729 3.534 - 3.548: 86.2174% ( 188) 00:16:10.729 3.548 - 3.562: 86.7587% ( 89) 00:16:10.729 3.562 - 3.590: 87.5798% ( 135) 00:16:10.729 3.590 - 3.617: 88.9240% ( 221) 00:16:10.729 3.617 - 3.645: 90.7183% ( 295) 00:16:10.729 3.645 - 3.673: 92.3362% ( 266) 00:16:10.729 3.673 - 3.701: 93.9237% ( 261) 00:16:10.729 3.701 - 3.729: 95.6998% ( 292) 00:16:10.729 3.729 - 3.757: 97.2143% ( 249) 00:16:10.729 3.757 - 3.784: 98.1510% ( 154) 00:16:10.729 3.784 - 3.812: 98.7105% ( 92) 00:16:10.729 3.812 - 3.840: 99.0572% ( 57) 00:16:10.729 3.840 - 3.868: 99.2823% ( 37) 00:16:10.729 3.868 - 3.896: 99.3735% ( 15) 00:16:10.729 3.896 - 3.923: 99.4100% ( 6) 00:16:10.729 3.923 - 3.951: 99.4222% ( 2) 00:16:10.729 3.951 - 3.979: 99.4404% ( 3) 00:16:10.729 4.007 - 4.035: 99.4587% ( 3) 00:16:10.729 4.035 - 4.063: 99.4648% ( 1) 00:16:10.729 4.118 - 4.146: 99.4708% ( 1) 00:16:10.729 4.174 - 4.202: 99.4769% ( 1) 00:16:10.729 4.202 - 4.230: 99.4830% ( 1) 00:16:10.729 4.230 - 4.257: 99.4891% ( 1) 00:16:10.729 4.369 - 4.397: 99.5012% ( 2) 00:16:10.729 4.536 - 4.563: 99.5073% ( 1) 00:16:10.729 4.563 - 4.591: 99.5134% ( 1) 00:16:10.729 4.591 - 4.619: 99.5195% ( 1) 00:16:10.729 4.703 - 4.730: 99.5256% ( 1) 00:16:10.729 4.953 - 4.981: 99.5317% ( 1) 00:16:10.729 5.259 - 5.287: 99.5377% ( 1) 00:16:10.729 5.315 - 5.343: 99.5499% ( 2) 00:16:10.729 5.510 - 5.537: 99.5560% ( 1) 00:16:10.729 5.537 - 5.565: 99.5621% ( 1) 00:16:10.729 5.704 - 5.732: 99.5682% ( 1) 00:16:10.729 5.732 - 5.760: 99.5742% ( 1) 00:16:10.729 5.899 - 5.927: 99.5864% ( 2) 00:16:10.729 5.955 - 5.983: 99.5925% ( 1) 00:16:10.729 5.983 - 6.010: 99.5986% ( 1) 00:16:10.729 6.150 - 6.177: 99.6046% ( 1) 00:16:10.729 6.261 - 6.289: 99.6107% ( 1) 00:16:10.729 6.317 - 6.344: 99.6168% ( 1) 00:16:10.729 6.344 - 6.372: 99.6290% ( 2) 00:16:10.729 6.372 - 6.400: 99.6411% ( 2) 00:16:10.729 6.400 - 6.428: 99.6472% ( 1) 00:16:10.729 6.428 - 6.456: 99.6594% ( 2) 00:16:10.729 6.456 - 6.483: 99.6655% ( 1) 00:16:10.729 6.483 - 6.511: 99.6776% ( 2) 00:16:10.729 6.511 - 6.539: 99.6837% ( 1) 00:16:10.729 6.539 - 6.567: 99.6898% ( 1) 00:16:10.729 6.623 - 6.650: 99.6959% ( 1) 00:16:10.729 6.678 - 6.706: 99.7080% ( 2) 00:16:10.729 6.706 - 6.734: 99.7141% ( 1) 00:16:10.729 6.734 - 6.762: 99.7202% ( 1) 00:16:10.729 6.762 - 6.790: 99.7263% ( 1) 
00:16:10.729 6.817 - 6.845: 99.7385% ( 2) 00:16:10.729 6.845 - 6.873: 99.7445% ( 1) 00:16:10.729 6.929 - 6.957: 99.7506% ( 1) 00:16:10.729 6.957 - 6.984: 99.7567% ( 1) 00:16:10.729 6.984 - 7.012: 99.7628% ( 1) 00:16:10.729 7.012 - 7.040: 99.7689% ( 1) 00:16:10.729 7.096 - 7.123: 99.7750% ( 1) 00:16:10.729 7.179 - 7.235: 99.7871% ( 2) 00:16:10.729 7.235 - 7.290: 99.7932% ( 1) 00:16:10.729 7.290 - 7.346: 99.7993% ( 1) 00:16:10.729 7.402 - 7.457: 99.8114% ( 2) 00:16:10.729 7.457 - 7.513: 99.8175% ( 1) 00:16:10.729 7.569 - 7.624: 99.8236% ( 1) 00:16:10.729 7.736 - 7.791: 99.8358% ( 2) 00:16:10.729 7.958 - 8.014: 99.8419% ( 1) 00:16:10.729 8.237 - 8.292: 99.8479% ( 1) 00:16:10.729 8.403 - 8.459: 99.8540% ( 1) 00:16:10.729 9.962 - 10.017: 99.8601% ( 1) 00:16:10.729 10.351 - 10.407: 99.8662% ( 1) 00:16:10.729 11.576 - 11.631: 99.8723% ( 1) 00:16:10.729 14.080 - 14.136: 99.8784% ( 1) 00:16:10.729 14.136 - 14.191: 99.8844% ( 1) 00:16:10.729 40.737 - 40.960: 99.8905% ( 1) 00:16:10.729 160.278 - 161.169: 99.8966% ( 1) 00:16:10.729 3989.148 - 4017.642: 100.0000% ( 17) 00:16:10.729 00:16:10.729 Complete histogram 00:16:10.729 ================== 00:16:10.729 Range in us Cumulative Count 00:16:10.729 1.760 - 1.767: 0.0426% ( 7) 00:16:10.729 1.767 - 1.774: 0.0912% ( 8) 00:16:10.729 1.774 - 1.781: 0.1216% ( 5) 00:16:10.729 1.781 - 1.795: 0.1460% ( 4) 00:16:10.729 1.795 - 1.809: 0.2250% ( 13) 00:16:10.729 1.809 - 1.823: 6.3013% ( 999) 00:16:10.729 1.823 - 1.837: 30.6612% ( 4005) 00:16:10.729 1.837 - 1.850: 37.2727% ( 1087) 00:16:10.729 1.850 - 1.864: 40.4842% ( 528) 00:16:10.729 1.864 - 1.878: 64.3756% ( 3928) 00:16:10.729 1.878 - 1.892: 88.1029% ( 3901) 00:16:10.729 1.892 - 1.906: 94.6050% ( 1069) 00:16:10.729 1.906 - 1.920: 96.4662% ( 306) 00:16:10.729 1.920 - 1.934: 97.0014% ( 88) 00:16:10.729 1.934 - 1.948: 97.6218% ( 102) 00:16:10.729 1.948 - 1.962: 98.3943% ( 127) 00:16:10.729 1.962 - 1.976: 98.7288% ( 55) 00:16:10.729 1.976 - 1.990: 98.8322% ( 17) 00:16:10.729 1.990 - 2.003: 98.8930% ( 10) 00:16:10.729 2.003 - 2.017: 99.0025% ( 18) 00:16:10.729 2.017 - 2.031: 99.0816% ( 13) 00:16:10.729 2.045 - 2.059: 99.0937% ( 2) 00:16:10.729 2.059 - 2.073: 99.1059% ( 2) 00:16:10.729 2.073 - 2.087: 99.1302% ( 4) 00:16:10.729 2.087 - 2.101: 99.1546% ( 4) 00:16:10.729 2.101 - 2.115: 99.1789% ( 4) 00:16:10.729 2.115 - 2.129: 99.2032% ( 4) 00:16:10.729 2.129 - 2.143: 99.2215% ( 3) 00:16:10.729 2.143 - 2.157: 99.2275% ( 1) 00:16:10.729 2.157 - 2.170: 99.2519% ( 4) 00:16:10.729 2.170 - 2.184: 99.2640% ( 2) 00:16:10.729 2.184 - 2.198: 99.2701% ( 1) 00:16:10.729 2.198 - 2.212: 99.2762% ( 1) 00:16:10.729 2.212 - 2.226: 99.2884% ( 2) 00:16:10.729 2.226 - 2.240: 99.2944% ( 1) 00:16:10.730 2.296 - 2.310: 99.3005% ( 1) 00:16:10.730 2.393 - 2.407: 99.3066% ( 1) 00:16:10.730 2.602 - 2.616: 99.3127% ( 1) 00:16:10.730 2.852 - 2.866: 99.3188% ( 1) 00:16:10.730 2.866 - 2.880: 99.3249% ( 1) 00:16:10.730 4.035 - 4.063: 99.3309% ( 1) 00:16:10.730 4.063 - 4.090: 99.3370% ( 1) 00:16:10.730 4.090 - 4.118: 99.3431% ( 1) 00:16:10.730 4.146 - 4.174: 99.3492% ( 1) 00:16:10.730 4.174 - 4.202: 99.3553% ( 1) 00:16:10.730 4.313 - 4.341: 99.3614% ( 1) 00:16:10.730 4.369 - 4.397: 99.3674% ( 1) 00:16:10.730 4.452 - 4.480: 99.3735% ( 1) 00:16:10.730 4.591 - 4.619: 99.3857% ( 2) 00:16:10.730 4.730 - 4.758: 99.3978% ( 2) 00:16:10.730 4.870 - 4.897: 99.4039% ( 1) 00:16:10.730 5.037 - 5.064: 99.4100% ( 1) 00:16:10.730 5.120 - 5.148: 99.4161% ( 1) 00:16:10.730 5.203 - 5.231: 99.4222% ( 1) 00:16:10.730 5.343 - 5.370: 99.4283% ( 1) 00:16:10.730 5.370 - 
5.398: 99.4404% ( 2) 00:16:10.730 5.454 - 5.482: 99.4465% ( 1) 00:16:10.730 5.482 - 5.510: 99.4526% ( 1) 00:16:10.730 5.565 - 5.593: 99.4587% ( 1) 00:16:10.730 5.593 - 5.621: 99.4648% ( 1) 00:16:10.730 5.704 - 5.732: 99.4708% ( 1) 00:16:10.730 6.066 - 6.094: 99.4769% ( 1) 00:16:10.730 [2024-07-15 12:06:00.699546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.988 6.122 - 6.150: 99.4891% ( 2) 00:16:10.988 6.623 - 6.650: 99.4952% ( 1) 00:16:10.988 6.650 - 6.678: 99.5012% ( 1) 00:16:10.988 9.405 - 9.461: 99.5073% ( 1) 00:16:10.988 9.517 - 9.572: 99.5134% ( 1) 00:16:10.988 9.962 - 10.017: 99.5195% ( 1) 00:16:10.988 12.077 - 12.132: 99.5256% ( 1) 00:16:10.988 13.802 - 13.857: 99.5317% ( 1) 00:16:10.988 15.583 - 15.694: 99.5377% ( 1) 00:16:10.988 16.362 - 16.473: 99.5438% ( 1) 00:16:10.988 32.501 - 32.723: 99.5499% ( 1) 00:16:10.988 40.960 - 41.183: 99.5560% ( 1) 00:16:10.988 1638.400 - 1645.523: 99.5621% ( 1) 00:16:10.988 2194.031 - 2208.278: 99.5682% ( 1) 00:16:10.988 2564.452 - 2578.699: 99.5742% ( 1) 00:16:10.988 3305.294 - 3319.541: 99.5803% ( 1) 00:16:10.988 3504.751 - 3518.998: 99.5864% ( 1) 00:16:10.988 3989.148 - 4017.642: 99.9878% ( 66) 00:16:10.988 4017.642 - 4046.136: 99.9939% ( 1) 00:16:10.988 4986.435 - 5014.929: 100.0000% ( 1) 00:16:10.988 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:10.988 [ 00:16:10.988 { 00:16:10.988 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:10.988 "subtype": "Discovery", 00:16:10.988 "listen_addresses": [], 00:16:10.988 "allow_any_host": true, 00:16:10.988 "hosts": [] 00:16:10.988 }, 00:16:10.988 { 00:16:10.988 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:10.988 "subtype": "NVMe", 00:16:10.988 "listen_addresses": [ 00:16:10.988 { 00:16:10.988 "trtype": "VFIOUSER", 00:16:10.988 "adrfam": "IPv4", 00:16:10.988 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:10.988 "trsvcid": "0" 00:16:10.988 } 00:16:10.988 ], 00:16:10.988 "allow_any_host": true, 00:16:10.988 "hosts": [], 00:16:10.988 "serial_number": "SPDK1", 00:16:10.988 "model_number": "SPDK bdev Controller", 00:16:10.988 "max_namespaces": 32, 00:16:10.988 "min_cntlid": 1, 00:16:10.988 "max_cntlid": 65519, 00:16:10.988 "namespaces": [ 00:16:10.988 { 00:16:10.988 "nsid": 1, 00:16:10.988 "bdev_name": "Malloc1", 00:16:10.988 "name": "Malloc1", 00:16:10.988 "nguid": "4E9196D1CB93437E9D7A6439F8BA57E0", 00:16:10.988 "uuid": "4e9196d1-cb93-437e-9d7a-6439f8ba57e0" 00:16:10.988 } 00:16:10.988 ] 00:16:10.988 }, 00:16:10.988 { 00:16:10.988 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:10.988 "subtype": "NVMe", 00:16:10.988 "listen_addresses": [ 00:16:10.988 { 00:16:10.988 "trtype": "VFIOUSER", 00:16:10.988 "adrfam": "IPv4", 00:16:10.988 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:10.988 "trsvcid": "0" 00:16:10.988 } 00:16:10.988 ], 00:16:10.988 "allow_any_host": true, 
"hosts": [], 00:16:10.988 "serial_number": "SPDK2", 00:16:10.988 "model_number": "SPDK bdev Controller", 00:16:10.988 "max_namespaces": 32, 00:16:10.988 "min_cntlid": 1, 00:16:10.988 "max_cntlid": 65519, 00:16:10.988 "namespaces": [ 00:16:10.988 { 00:16:10.988 "nsid": 1, 00:16:10.988 "bdev_name": "Malloc2", 00:16:10.988 "name": "Malloc2", 00:16:10.988 "nguid": "636DDF0D1AA64B148B79CA04F621E08D", 00:16:10.988 "uuid": "636ddf0d-1aa6-4b14-8b79-ca04f621e08d" 00:16:10.988 } 00:16:10.988 ] 00:16:10.988 } 00:16:10.988 ] 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1087074 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:10.988 12:06:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.247 [2024-07-15 12:06:01.061609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.247 Malloc3 00:16:11.247 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:11.505 [2024-07-15 12:06:01.303422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:11.505 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:11.505 Asynchronous Event Request test 00:16:11.505 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.505 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.505 Registering asynchronous event callbacks... 00:16:11.505 Starting namespace attribute notice tests for all controllers... 00:16:11.505 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:11.505 aer_cb - Changed Namespace 00:16:11.505 Cleaning up... 
00:16:11.505 [ 00:16:11.506 { 00:16:11.506 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:11.506 "subtype": "Discovery", 00:16:11.506 "listen_addresses": [], 00:16:11.506 "allow_any_host": true, 00:16:11.506 "hosts": [] 00:16:11.506 }, 00:16:11.506 { 00:16:11.506 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:11.506 "subtype": "NVMe", 00:16:11.506 "listen_addresses": [ 00:16:11.506 { 00:16:11.506 "trtype": "VFIOUSER", 00:16:11.506 "adrfam": "IPv4", 00:16:11.506 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:11.506 "trsvcid": "0" 00:16:11.506 } 00:16:11.506 ], 00:16:11.506 "allow_any_host": true, 00:16:11.506 "hosts": [], 00:16:11.506 "serial_number": "SPDK1", 00:16:11.506 "model_number": "SPDK bdev Controller", 00:16:11.506 "max_namespaces": 32, 00:16:11.506 "min_cntlid": 1, 00:16:11.506 "max_cntlid": 65519, 00:16:11.506 "namespaces": [ 00:16:11.506 { 00:16:11.506 "nsid": 1, 00:16:11.506 "bdev_name": "Malloc1", 00:16:11.506 "name": "Malloc1", 00:16:11.506 "nguid": "4E9196D1CB93437E9D7A6439F8BA57E0", 00:16:11.506 "uuid": "4e9196d1-cb93-437e-9d7a-6439f8ba57e0" 00:16:11.506 }, 00:16:11.506 { 00:16:11.506 "nsid": 2, 00:16:11.506 "bdev_name": "Malloc3", 00:16:11.506 "name": "Malloc3", 00:16:11.506 "nguid": "B85854CEEF404ABCA4D3977A3483A58D", 00:16:11.506 "uuid": "b85854ce-ef40-4abc-a4d3-977a3483a58d" 00:16:11.506 } 00:16:11.506 ] 00:16:11.506 }, 00:16:11.506 { 00:16:11.506 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:11.506 "subtype": "NVMe", 00:16:11.506 "listen_addresses": [ 00:16:11.506 { 00:16:11.506 "trtype": "VFIOUSER", 00:16:11.506 "adrfam": "IPv4", 00:16:11.506 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:11.506 "trsvcid": "0" 00:16:11.506 } 00:16:11.506 ], 00:16:11.506 "allow_any_host": true, 00:16:11.506 "hosts": [], 00:16:11.506 "serial_number": "SPDK2", 00:16:11.506 "model_number": "SPDK bdev Controller", 00:16:11.506 "max_namespaces": 32, 00:16:11.506 "min_cntlid": 1, 00:16:11.506 "max_cntlid": 65519, 00:16:11.506 "namespaces": [ 00:16:11.506 { 00:16:11.506 "nsid": 1, 00:16:11.506 "bdev_name": "Malloc2", 00:16:11.506 "name": "Malloc2", 00:16:11.506 "nguid": "636DDF0D1AA64B148B79CA04F621E08D", 00:16:11.506 "uuid": "636ddf0d-1aa6-4b14-8b79-ca04f621e08d" 00:16:11.506 } 00:16:11.506 ] 00:16:11.506 } 00:16:11.506 ] 00:16:11.766 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1087074 00:16:11.766 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:11.766 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:11.766 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:11.766 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:11.766 [2024-07-15 12:06:01.541475] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:16:11.766 [2024-07-15 12:06:01.541509] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087331 ] 00:16:11.766 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.766 [2024-07-15 12:06:01.569594] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:11.766 [2024-07-15 12:06:01.579463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:11.766 [2024-07-15 12:06:01.579486] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4766de7000 00:16:11.766 [2024-07-15 12:06:01.580457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.581465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.582472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.583479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.584483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.585491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.586494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.587494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:11.766 [2024-07-15 12:06:01.588504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:11.766 [2024-07-15 12:06:01.588514] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4765bad000 00:16:11.766 [2024-07-15 12:06:01.589454] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:11.766 [2024-07-15 12:06:01.601974] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:11.766 [2024-07-15 12:06:01.601996] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:11.766 [2024-07-15 12:06:01.604054] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:11.766 [2024-07-15 12:06:01.604093] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:11.766 [2024-07-15 12:06:01.604159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:16:11.766 [2024-07-15 12:06:01.604173] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:11.766 [2024-07-15 12:06:01.604178] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:11.766 [2024-07-15 12:06:01.605060] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:11.766 [2024-07-15 12:06:01.605068] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:11.766 [2024-07-15 12:06:01.605074] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:11.766 [2024-07-15 12:06:01.606061] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:11.766 [2024-07-15 12:06:01.606069] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:11.766 [2024-07-15 12:06:01.606075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:11.766 [2024-07-15 12:06:01.607076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:11.766 [2024-07-15 12:06:01.607084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:11.766 [2024-07-15 12:06:01.608088] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:11.766 [2024-07-15 12:06:01.608096] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:11.766 [2024-07-15 12:06:01.608100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:11.766 [2024-07-15 12:06:01.608106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:11.766 [2024-07-15 12:06:01.608211] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:11.766 [2024-07-15 12:06:01.608215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:11.766 [2024-07-15 12:06:01.608219] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:11.766 [2024-07-15 12:06:01.612230] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:11.766 [2024-07-15 12:06:01.613127] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:11.766 [2024-07-15 12:06:01.614134] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:11.766 [2024-07-15 12:06:01.615134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:11.766 [2024-07-15 12:06:01.615170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:11.767 [2024-07-15 12:06:01.616153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:11.767 [2024-07-15 12:06:01.616161] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:11.767 [2024-07-15 12:06:01.616165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.616182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:11.767 [2024-07-15 12:06:01.616188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.616199] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.767 [2024-07-15 12:06:01.616203] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.767 [2024-07-15 12:06:01.616213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.621595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.621607] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:11.767 [2024-07-15 12:06:01.621614] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:11.767 [2024-07-15 12:06:01.621618] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:11.767 [2024-07-15 12:06:01.621622] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:11.767 [2024-07-15 12:06:01.621626] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:11.767 [2024-07-15 12:06:01.621630] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:11.767 [2024-07-15 12:06:01.621634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.621640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.621650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:16:11.767 [2024-07-15 12:06:01.630229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.630243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.767 [2024-07-15 12:06:01.630251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.767 [2024-07-15 12:06:01.630258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.767 [2024-07-15 12:06:01.630265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.767 [2024-07-15 12:06:01.630271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.630279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.630287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.638229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.638236] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:11.767 [2024-07-15 12:06:01.638241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.638247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.638252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.638260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.646231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.646283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.646290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.646296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:11.767 [2024-07-15 12:06:01.646300] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:11.767 [2024-07-15 12:06:01.646307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.654232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.654242] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:11.767 [2024-07-15 12:06:01.654254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.654260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.654266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.767 [2024-07-15 12:06:01.654271] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.767 [2024-07-15 12:06:01.654277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.662229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.662241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.662248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.662257] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:11.767 [2024-07-15 12:06:01.662261] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.767 [2024-07-15 12:06:01.662266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:11.767 [2024-07-15 12:06:01.670230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:11.767 [2024-07-15 12:06:01.670239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.670246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.670253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.670258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:11.767 [2024-07-15 12:06:01.670263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:11.768 [2024-07-15 12:06:01.670268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:11.768 
[2024-07-15 12:06:01.670272] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:11.768 [2024-07-15 12:06:01.670276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:11.768 [2024-07-15 12:06:01.670280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:11.768 [2024-07-15 12:06:01.670295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.678231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.678244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.686229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.686243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.694230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.694242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.702232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.702249] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:11.768 [2024-07-15 12:06:01.702253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:11.768 [2024-07-15 12:06:01.702256] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:11.768 [2024-07-15 12:06:01.702259] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:11.768 [2024-07-15 12:06:01.702265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:11.768 [2024-07-15 12:06:01.702273] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:11.768 [2024-07-15 12:06:01.702277] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:11.768 [2024-07-15 12:06:01.702283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.702289] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:11.768 [2024-07-15 12:06:01.702293] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:11.768 [2024-07-15 12:06:01.702298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:16:11.768 [2024-07-15 12:06:01.702304] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:11.768 [2024-07-15 12:06:01.702308] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:11.768 [2024-07-15 12:06:01.702313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:11.768 [2024-07-15 12:06:01.710230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.710244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.710253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:11.768 [2024-07-15 12:06:01.710259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:11.768 ===================================================== 00:16:11.768 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:11.768 ===================================================== 00:16:11.768 Controller Capabilities/Features 00:16:11.768 ================================ 00:16:11.768 Vendor ID: 4e58 00:16:11.768 Subsystem Vendor ID: 4e58 00:16:11.768 Serial Number: SPDK2 00:16:11.768 Model Number: SPDK bdev Controller 00:16:11.768 Firmware Version: 24.09 00:16:11.768 Recommended Arb Burst: 6 00:16:11.768 IEEE OUI Identifier: 8d 6b 50 00:16:11.768 Multi-path I/O 00:16:11.768 May have multiple subsystem ports: Yes 00:16:11.768 May have multiple controllers: Yes 00:16:11.768 Associated with SR-IOV VF: No 00:16:11.768 Max Data Transfer Size: 131072 00:16:11.768 Max Number of Namespaces: 32 00:16:11.768 Max Number of I/O Queues: 127 00:16:11.768 NVMe Specification Version (VS): 1.3 00:16:11.768 NVMe Specification Version (Identify): 1.3 00:16:11.768 Maximum Queue Entries: 256 00:16:11.768 Contiguous Queues Required: Yes 00:16:11.768 Arbitration Mechanisms Supported 00:16:11.768 Weighted Round Robin: Not Supported 00:16:11.768 Vendor Specific: Not Supported 00:16:11.768 Reset Timeout: 15000 ms 00:16:11.768 Doorbell Stride: 4 bytes 00:16:11.768 NVM Subsystem Reset: Not Supported 00:16:11.768 Command Sets Supported 00:16:11.768 NVM Command Set: Supported 00:16:11.768 Boot Partition: Not Supported 00:16:11.768 Memory Page Size Minimum: 4096 bytes 00:16:11.768 Memory Page Size Maximum: 4096 bytes 00:16:11.768 Persistent Memory Region: Not Supported 00:16:11.768 Optional Asynchronous Events Supported 00:16:11.768 Namespace Attribute Notices: Supported 00:16:11.768 Firmware Activation Notices: Not Supported 00:16:11.768 ANA Change Notices: Not Supported 00:16:11.768 PLE Aggregate Log Change Notices: Not Supported 00:16:11.768 LBA Status Info Alert Notices: Not Supported 00:16:11.768 EGE Aggregate Log Change Notices: Not Supported 00:16:11.768 Normal NVM Subsystem Shutdown event: Not Supported 00:16:11.768 Zone Descriptor Change Notices: Not Supported 00:16:11.768 Discovery Log Change Notices: Not Supported 00:16:11.768 Controller Attributes 00:16:11.768 128-bit Host Identifier: Supported 00:16:11.768 Non-Operational Permissive Mode: Not Supported 00:16:11.768 NVM Sets: Not Supported 00:16:11.768 Read Recovery Levels: Not Supported 
00:16:11.768 Endurance Groups: Not Supported 00:16:11.768 Predictable Latency Mode: Not Supported 00:16:11.768 Traffic Based Keep ALive: Not Supported 00:16:11.768 Namespace Granularity: Not Supported 00:16:11.768 SQ Associations: Not Supported 00:16:11.768 UUID List: Not Supported 00:16:11.768 Multi-Domain Subsystem: Not Supported 00:16:11.768 Fixed Capacity Management: Not Supported 00:16:11.768 Variable Capacity Management: Not Supported 00:16:11.768 Delete Endurance Group: Not Supported 00:16:11.769 Delete NVM Set: Not Supported 00:16:11.769 Extended LBA Formats Supported: Not Supported 00:16:11.769 Flexible Data Placement Supported: Not Supported 00:16:11.769 00:16:11.769 Controller Memory Buffer Support 00:16:11.769 ================================ 00:16:11.769 Supported: No 00:16:11.769 00:16:11.769 Persistent Memory Region Support 00:16:11.769 ================================ 00:16:11.769 Supported: No 00:16:11.769 00:16:11.769 Admin Command Set Attributes 00:16:11.769 ============================ 00:16:11.769 Security Send/Receive: Not Supported 00:16:11.769 Format NVM: Not Supported 00:16:11.769 Firmware Activate/Download: Not Supported 00:16:11.769 Namespace Management: Not Supported 00:16:11.769 Device Self-Test: Not Supported 00:16:11.769 Directives: Not Supported 00:16:11.769 NVMe-MI: Not Supported 00:16:11.769 Virtualization Management: Not Supported 00:16:11.769 Doorbell Buffer Config: Not Supported 00:16:11.769 Get LBA Status Capability: Not Supported 00:16:11.769 Command & Feature Lockdown Capability: Not Supported 00:16:11.769 Abort Command Limit: 4 00:16:11.769 Async Event Request Limit: 4 00:16:11.769 Number of Firmware Slots: N/A 00:16:11.769 Firmware Slot 1 Read-Only: N/A 00:16:11.769 Firmware Activation Without Reset: N/A 00:16:11.769 Multiple Update Detection Support: N/A 00:16:11.769 Firmware Update Granularity: No Information Provided 00:16:11.769 Per-Namespace SMART Log: No 00:16:11.769 Asymmetric Namespace Access Log Page: Not Supported 00:16:11.769 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:11.769 Command Effects Log Page: Supported 00:16:11.769 Get Log Page Extended Data: Supported 00:16:11.769 Telemetry Log Pages: Not Supported 00:16:11.769 Persistent Event Log Pages: Not Supported 00:16:11.769 Supported Log Pages Log Page: May Support 00:16:11.769 Commands Supported & Effects Log Page: Not Supported 00:16:11.769 Feature Identifiers & Effects Log Page:May Support 00:16:11.769 NVMe-MI Commands & Effects Log Page: May Support 00:16:11.769 Data Area 4 for Telemetry Log: Not Supported 00:16:11.769 Error Log Page Entries Supported: 128 00:16:11.769 Keep Alive: Supported 00:16:11.769 Keep Alive Granularity: 10000 ms 00:16:11.769 00:16:11.769 NVM Command Set Attributes 00:16:11.769 ========================== 00:16:11.769 Submission Queue Entry Size 00:16:11.769 Max: 64 00:16:11.769 Min: 64 00:16:11.769 Completion Queue Entry Size 00:16:11.769 Max: 16 00:16:11.769 Min: 16 00:16:11.769 Number of Namespaces: 32 00:16:11.769 Compare Command: Supported 00:16:11.769 Write Uncorrectable Command: Not Supported 00:16:11.769 Dataset Management Command: Supported 00:16:11.769 Write Zeroes Command: Supported 00:16:11.769 Set Features Save Field: Not Supported 00:16:11.769 Reservations: Not Supported 00:16:11.769 Timestamp: Not Supported 00:16:11.769 Copy: Supported 00:16:11.769 Volatile Write Cache: Present 00:16:11.769 Atomic Write Unit (Normal): 1 00:16:11.769 Atomic Write Unit (PFail): 1 00:16:11.769 Atomic Compare & Write Unit: 1 00:16:11.769 Fused Compare & Write: 
Supported 00:16:11.769 Scatter-Gather List 00:16:11.769 SGL Command Set: Supported (Dword aligned) 00:16:11.769 SGL Keyed: Not Supported 00:16:11.769 SGL Bit Bucket Descriptor: Not Supported 00:16:11.769 SGL Metadata Pointer: Not Supported 00:16:11.769 Oversized SGL: Not Supported 00:16:11.769 SGL Metadata Address: Not Supported 00:16:11.769 SGL Offset: Not Supported 00:16:11.769 Transport SGL Data Block: Not Supported 00:16:11.769 Replay Protected Memory Block: Not Supported 00:16:11.769 00:16:11.769 Firmware Slot Information 00:16:11.769 ========================= 00:16:11.769 Active slot: 1 00:16:11.769 Slot 1 Firmware Revision: 24.09 00:16:11.769 00:16:11.769 00:16:11.769 Commands Supported and Effects 00:16:11.769 ============================== 00:16:11.769 Admin Commands 00:16:11.769 -------------- 00:16:11.769 Get Log Page (02h): Supported 00:16:11.769 Identify (06h): Supported 00:16:11.769 Abort (08h): Supported 00:16:11.769 Set Features (09h): Supported 00:16:11.769 Get Features (0Ah): Supported 00:16:11.769 Asynchronous Event Request (0Ch): Supported 00:16:11.769 Keep Alive (18h): Supported 00:16:11.769 I/O Commands 00:16:11.769 ------------ 00:16:11.769 Flush (00h): Supported LBA-Change 00:16:11.769 Write (01h): Supported LBA-Change 00:16:11.769 Read (02h): Supported 00:16:11.769 Compare (05h): Supported 00:16:11.769 Write Zeroes (08h): Supported LBA-Change 00:16:11.769 Dataset Management (09h): Supported LBA-Change 00:16:11.769 Copy (19h): Supported LBA-Change 00:16:11.769 00:16:11.769 Error Log 00:16:11.769 ========= 00:16:11.769 00:16:11.769 Arbitration 00:16:11.769 =========== 00:16:11.769 Arbitration Burst: 1 00:16:11.769 00:16:11.769 Power Management 00:16:11.769 ================ 00:16:11.769 Number of Power States: 1 00:16:11.769 Current Power State: Power State #0 00:16:11.769 Power State #0: 00:16:11.769 Max Power: 0.00 W 00:16:11.769 Non-Operational State: Operational 00:16:11.769 Entry Latency: Not Reported 00:16:11.769 Exit Latency: Not Reported 00:16:11.769 Relative Read Throughput: 0 00:16:11.769 Relative Read Latency: 0 00:16:11.769 Relative Write Throughput: 0 00:16:11.769 Relative Write Latency: 0 00:16:11.769 Idle Power: Not Reported 00:16:11.769 Active Power: Not Reported 00:16:11.770 Non-Operational Permissive Mode: Not Supported 00:16:11.770 00:16:11.770 Health Information 00:16:11.770 ================== 00:16:11.770 Critical Warnings: 00:16:11.770 Available Spare Space: OK 00:16:11.770 Temperature: OK 00:16:11.770 Device Reliability: OK 00:16:11.770 Read Only: No 00:16:11.770 Volatile Memory Backup: OK 00:16:11.770 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:11.770 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:11.770 Available Spare: 0% 00:16:11.770 Available Sp[2024-07-15 12:06:01.710344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:11.770 [2024-07-15 12:06:01.718230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:11.770 [2024-07-15 12:06:01.718262] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:11.770 [2024-07-15 12:06:01.718270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.770 [2024-07-15 12:06:01.718276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.770 [2024-07-15 12:06:01.718281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.770 [2024-07-15 12:06:01.718287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.770 [2024-07-15 12:06:01.718334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:11.770 [2024-07-15 12:06:01.718343] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:11.770 [2024-07-15 12:06:01.719338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:11.770 [2024-07-15 12:06:01.719379] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:11.770 [2024-07-15 12:06:01.719386] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:11.770 [2024-07-15 12:06:01.720344] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:11.770 [2024-07-15 12:06:01.720355] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:11.770 [2024-07-15 12:06:01.720398] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:11.770 [2024-07-15 12:06:01.721380] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:11.770 are Threshold: 0% 00:16:11.770 Life Percentage Used: 0% 00:16:11.770 Data Units Read: 0 00:16:11.770 Data Units Written: 0 00:16:11.770 Host Read Commands: 0 00:16:11.770 Host Write Commands: 0 00:16:11.770 Controller Busy Time: 0 minutes 00:16:11.770 Power Cycles: 0 00:16:11.770 Power On Hours: 0 hours 00:16:11.770 Unsafe Shutdowns: 0 00:16:11.770 Unrecoverable Media Errors: 0 00:16:11.770 Lifetime Error Log Entries: 0 00:16:11.770 Warning Temperature Time: 0 minutes 00:16:11.770 Critical Temperature Time: 0 minutes 00:16:11.770 00:16:11.770 Number of Queues 00:16:11.770 ================ 00:16:11.770 Number of I/O Submission Queues: 127 00:16:11.770 Number of I/O Completion Queues: 127 00:16:11.770 00:16:11.770 Active Namespaces 00:16:11.770 ================= 00:16:11.770 Namespace ID:1 00:16:11.770 Error Recovery Timeout: Unlimited 00:16:11.770 Command Set Identifier: NVM (00h) 00:16:11.770 Deallocate: Supported 00:16:11.770 Deallocated/Unwritten Error: Not Supported 00:16:11.770 Deallocated Read Value: Unknown 00:16:11.770 Deallocate in Write Zeroes: Not Supported 00:16:11.770 Deallocated Guard Field: 0xFFFF 00:16:11.770 Flush: Supported 00:16:11.770 Reservation: Supported 00:16:11.770 Namespace Sharing Capabilities: Multiple Controllers 00:16:11.770 Size (in LBAs): 131072 (0GiB) 00:16:11.770 Capacity (in LBAs): 131072 (0GiB) 00:16:11.770 Utilization (in LBAs): 131072 (0GiB) 00:16:11.770 NGUID: 636DDF0D1AA64B148B79CA04F621E08D 00:16:11.770 UUID: 636ddf0d-1aa6-4b14-8b79-ca04f621e08d 00:16:11.770 Thin Provisioning: Not Supported 00:16:11.770 Per-NS Atomic Units: Yes 00:16:11.770 Atomic Boundary Size (Normal): 0 00:16:11.770 Atomic Boundary Size 
(PFail): 0 00:16:11.770 Atomic Boundary Offset: 0 00:16:11.770 Maximum Single Source Range Length: 65535 00:16:11.770 Maximum Copy Length: 65535 00:16:11.770 Maximum Source Range Count: 1 00:16:11.770 NGUID/EUI64 Never Reused: No 00:16:11.770 Namespace Write Protected: No 00:16:11.770 Number of LBA Formats: 1 00:16:11.770 Current LBA Format: LBA Format #00 00:16:11.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:11.770 00:16:11.770 12:06:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:12.029 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.029 [2024-07-15 12:06:01.938553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.296 Initializing NVMe Controllers 00:16:17.296 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.296 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:17.296 Initialization complete. Launching workers. 00:16:17.296 ======================================================== 00:16:17.296 Latency(us) 00:16:17.296 Device Information : IOPS MiB/s Average min max 00:16:17.296 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39942.65 156.03 3204.44 967.44 9380.69 00:16:17.296 ======================================================== 00:16:17.296 Total : 39942.65 156.03 3204.44 967.44 9380.69 00:16:17.296 00:16:17.296 [2024-07-15 12:06:07.045466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.296 12:06:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:17.296 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.296 [2024-07-15 12:06:07.270097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:22.566 Initializing NVMe Controllers 00:16:22.566 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:22.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:22.566 Initialization complete. Launching workers. 
00:16:22.566 ======================================================== 00:16:22.566 Latency(us) 00:16:22.566 Device Information : IOPS MiB/s Average min max 00:16:22.566 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39904.97 155.88 3207.45 975.14 7547.78 00:16:22.566 ======================================================== 00:16:22.566 Total : 39904.97 155.88 3207.45 975.14 7547.78 00:16:22.566 00:16:22.566 [2024-07-15 12:06:12.292280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:22.566 12:06:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:22.566 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.566 [2024-07-15 12:06:12.490693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.833 [2024-07-15 12:06:17.628322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.833 Initializing NVMe Controllers 00:16:27.833 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.833 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:27.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:27.833 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:27.833 Initialization complete. Launching workers. 00:16:27.833 Starting thread on core 2 00:16:27.833 Starting thread on core 3 00:16:27.833 Starting thread on core 1 00:16:27.833 12:06:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:27.833 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.090 [2024-07-15 12:06:17.914678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.374 [2024-07-15 12:06:20.981339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.374 Initializing NVMe Controllers 00:16:31.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:31.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:31.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:31.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:31.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:31.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:31.374 Initialization complete. Launching workers. 
00:16:31.374 Starting thread on core 1 with urgent priority queue 00:16:31.374 Starting thread on core 2 with urgent priority queue 00:16:31.374 Starting thread on core 3 with urgent priority queue 00:16:31.374 Starting thread on core 0 with urgent priority queue 00:16:31.374 SPDK bdev Controller (SPDK2 ) core 0: 8799.33 IO/s 11.36 secs/100000 ios 00:16:31.374 SPDK bdev Controller (SPDK2 ) core 1: 8279.00 IO/s 12.08 secs/100000 ios 00:16:31.374 SPDK bdev Controller (SPDK2 ) core 2: 8636.00 IO/s 11.58 secs/100000 ios 00:16:31.374 SPDK bdev Controller (SPDK2 ) core 3: 10726.00 IO/s 9.32 secs/100000 ios 00:16:31.374 ======================================================== 00:16:31.374 00:16:31.374 12:06:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:31.374 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.374 [2024-07-15 12:06:21.253617] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.374 Initializing NVMe Controllers 00:16:31.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.374 Namespace ID: 1 size: 0GB 00:16:31.374 Initialization complete. 00:16:31.374 INFO: using host memory buffer for IO 00:16:31.374 Hello world! 00:16:31.374 [2024-07-15 12:06:21.263698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.374 12:06:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:31.374 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.632 [2024-07-15 12:06:21.529117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.046 Initializing NVMe Controllers 00:16:33.046 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.046 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.046 Initialization complete. Launching workers. 
00:16:33.046 submit (in ns) avg, min, max = 6212.6, 3278.3, 4004840.0 00:16:33.046 complete (in ns) avg, min, max = 23943.9, 1780.0, 4013677.4 00:16:33.046 00:16:33.046 Submit histogram 00:16:33.046 ================ 00:16:33.046 Range in us Cumulative Count 00:16:33.046 3.270 - 3.283: 0.0123% ( 2) 00:16:33.046 3.283 - 3.297: 0.0920% ( 13) 00:16:33.046 3.297 - 3.311: 0.2822% ( 31) 00:16:33.046 3.311 - 3.325: 0.6563% ( 61) 00:16:33.046 3.325 - 3.339: 1.0980% ( 72) 00:16:33.046 3.339 - 3.353: 1.7604% ( 108) 00:16:33.046 3.353 - 3.367: 3.6742% ( 312) 00:16:33.046 3.367 - 3.381: 7.9801% ( 702) 00:16:33.046 3.381 - 3.395: 13.5006% ( 900) 00:16:33.046 3.395 - 3.409: 18.9536% ( 889) 00:16:33.046 3.409 - 3.423: 24.8175% ( 956) 00:16:33.046 3.423 - 3.437: 30.4545% ( 919) 00:16:33.046 3.437 - 3.450: 35.9320% ( 893) 00:16:33.046 3.450 - 3.464: 41.9003% ( 973) 00:16:33.046 3.464 - 3.478: 47.5005% ( 913) 00:16:33.046 3.478 - 3.492: 51.9720% ( 729) 00:16:33.046 3.492 - 3.506: 56.1553% ( 682) 00:16:33.046 3.506 - 3.520: 61.4181% ( 858) 00:16:33.046 3.520 - 3.534: 67.3250% ( 963) 00:16:33.046 3.534 - 3.548: 71.6065% ( 698) 00:16:33.046 3.548 - 3.562: 75.8204% ( 687) 00:16:33.046 3.562 - 3.590: 83.4509% ( 1244) 00:16:33.046 3.590 - 3.617: 86.5485% ( 505) 00:16:33.046 3.617 - 3.645: 87.6710% ( 183) 00:16:33.046 3.645 - 3.673: 88.7137% ( 170) 00:16:33.046 3.673 - 3.701: 90.4312% ( 280) 00:16:33.046 3.701 - 3.729: 92.3327% ( 310) 00:16:33.046 3.729 - 3.757: 94.0870% ( 286) 00:16:33.046 3.757 - 3.784: 95.7247% ( 267) 00:16:33.046 3.784 - 3.812: 97.1600% ( 234) 00:16:33.046 3.812 - 3.840: 98.1537% ( 162) 00:16:33.046 3.840 - 3.868: 98.8959% ( 121) 00:16:33.046 3.868 - 3.896: 99.2455% ( 57) 00:16:33.046 3.896 - 3.923: 99.4480% ( 33) 00:16:33.046 3.923 - 3.951: 99.5277% ( 13) 00:16:33.046 3.951 - 3.979: 99.5829% ( 9) 00:16:33.046 4.007 - 4.035: 99.5890% ( 1) 00:16:33.046 4.090 - 4.118: 99.5952% ( 1) 00:16:33.046 4.118 - 4.146: 99.6074% ( 2) 00:16:33.046 4.758 - 4.786: 99.6136% ( 1) 00:16:33.046 4.870 - 4.897: 99.6197% ( 1) 00:16:33.046 5.037 - 5.064: 99.6320% ( 2) 00:16:33.046 5.120 - 5.148: 99.6381% ( 1) 00:16:33.046 5.148 - 5.176: 99.6442% ( 1) 00:16:33.046 5.203 - 5.231: 99.6504% ( 1) 00:16:33.046 5.231 - 5.259: 99.6565% ( 1) 00:16:33.046 5.370 - 5.398: 99.6688% ( 2) 00:16:33.046 5.426 - 5.454: 99.6749% ( 1) 00:16:33.046 5.482 - 5.510: 99.6872% ( 2) 00:16:33.046 5.510 - 5.537: 99.6933% ( 1) 00:16:33.046 5.537 - 5.565: 99.6994% ( 1) 00:16:33.046 5.565 - 5.593: 99.7056% ( 1) 00:16:33.046 5.732 - 5.760: 99.7117% ( 1) 00:16:33.046 5.760 - 5.788: 99.7178% ( 1) 00:16:33.046 5.816 - 5.843: 99.7301% ( 2) 00:16:33.046 5.871 - 5.899: 99.7362% ( 1) 00:16:33.046 5.899 - 5.927: 99.7424% ( 1) 00:16:33.046 5.955 - 5.983: 99.7485% ( 1) 00:16:33.046 5.983 - 6.010: 99.7546% ( 1) 00:16:33.046 6.010 - 6.038: 99.7608% ( 1) 00:16:33.046 6.094 - 6.122: 99.7669% ( 1) 00:16:33.047 6.150 - 6.177: 99.7730% ( 1) 00:16:33.047 6.177 - 6.205: 99.7792% ( 1) 00:16:33.047 6.205 - 6.233: 99.7853% ( 1) 00:16:33.047 6.261 - 6.289: 99.7914% ( 1) 00:16:33.047 6.372 - 6.400: 99.7976% ( 1) 00:16:33.047 6.428 - 6.456: 99.8037% ( 1) 00:16:33.047 6.539 - 6.567: 99.8099% ( 1) 00:16:33.047 6.595 - 6.623: 99.8160% ( 1) 00:16:33.047 6.623 - 6.650: 99.8221% ( 1) 00:16:33.047 6.706 - 6.734: 99.8283% ( 1) 00:16:33.047 6.984 - 7.012: 99.8344% ( 1) 00:16:33.047 7.123 - 7.179: 99.8405% ( 1) 00:16:33.047 7.179 - 7.235: 99.8467% ( 1) 00:16:33.047 7.290 - 7.346: 99.8651% ( 3) 00:16:33.047 7.346 - 7.402: 99.8773% ( 2) 00:16:33.047 7.569 - 7.624: 99.8957% ( 3) 
00:16:33.047 7.624 - 7.680: 99.9080% ( 2) 00:16:33.047 7.736 - 7.791: 99.9141% ( 1) 00:16:33.047 [2024-07-15 12:06:22.626277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.047 8.070 - 8.125: 99.9203% ( 1) 00:16:33.047 8.237 - 8.292: 99.9264% ( 1) 00:16:33.047 13.913 - 13.969: 99.9325% ( 1) 00:16:33.047 3989.148 - 4017.642: 100.0000% ( 11) 00:16:33.047 00:16:33.047 Complete histogram 00:16:33.047 ================== 00:16:33.047 Range in us Cumulative Count 00:16:33.047 1.774 - 1.781: 0.0061% ( 1) 00:16:33.047 1.795 - 1.809: 0.0184% ( 2) 00:16:33.047 1.809 - 1.823: 0.0429% ( 4) 00:16:33.047 1.823 - 1.837: 0.8342% ( 129) 00:16:33.047 1.837 - 1.850: 2.4597% ( 265) 00:16:33.047 1.850 - 1.864: 3.6312% ( 191) 00:16:33.047 1.864 - 1.878: 4.9684% ( 218) 00:16:33.047 1.878 - 1.892: 36.5148% ( 5143) 00:16:33.047 1.892 - 1.906: 83.3405% ( 7634) 00:16:33.047 1.906 - 1.920: 92.3940% ( 1476) 00:16:33.047 1.920 - 1.934: 95.1972% ( 457) 00:16:33.047 1.934 - 1.948: 96.0498% ( 139) 00:16:33.047 1.948 - 1.962: 96.8227% ( 126) 00:16:33.047 1.962 - 1.976: 98.0801% ( 205) 00:16:33.047 1.976 - 1.990: 98.8836% ( 131) 00:16:33.047 1.990 - 2.003: 99.1106% ( 37) 00:16:33.047 2.003 - 2.017: 99.1658% ( 9) 00:16:33.047 2.017 - 2.031: 99.2271% ( 10) 00:16:33.047 2.031 - 2.045: 99.2639% ( 6) 00:16:33.047 2.045 - 2.059: 99.2762% ( 2) 00:16:33.047 2.073 - 2.087: 99.2885% ( 2) 00:16:33.047 2.143 - 2.157: 99.2946% ( 1) 00:16:33.047 2.254 - 2.268: 99.3007% ( 1) 00:16:33.047 3.492 - 3.506: 99.3069% ( 1) 00:16:33.047 3.562 - 3.590: 99.3130% ( 1) 00:16:33.047 3.645 - 3.673: 99.3191% ( 1) 00:16:33.047 3.701 - 3.729: 99.3253% ( 1) 00:16:33.047 4.007 - 4.035: 99.3314% ( 1) 00:16:33.047 4.035 - 4.063: 99.3437% ( 2) 00:16:33.047 4.174 - 4.202: 99.3682% ( 4) 00:16:33.047 4.285 - 4.313: 99.3743% ( 1) 00:16:33.047 4.341 - 4.369: 99.3805% ( 1) 00:16:33.047 4.369 - 4.397: 99.3866% ( 1) 00:16:33.047 4.452 - 4.480: 99.3927% ( 1) 00:16:33.047 4.786 - 4.814: 99.3989% ( 1) 00:16:33.047 5.148 - 5.176: 99.4050% ( 1) 00:16:33.047 5.231 - 5.259: 99.4112% ( 1) 00:16:33.047 5.537 - 5.565: 99.4173% ( 1) 00:16:33.047 6.150 - 6.177: 99.4234% ( 1) 00:16:33.047 6.901 - 6.929: 99.4296% ( 1) 00:16:33.047 11.743 - 11.798: 99.4357% ( 1) 00:16:33.047 12.410 - 12.466: 99.4418% ( 1) 00:16:33.047 186.101 - 186.991: 99.4480% ( 1) 00:16:33.047 3761.197 - 3789.690: 99.4541% ( 1) 00:16:33.047 3846.678 - 3875.172: 99.4602% ( 1) 00:16:33.047 3989.148 - 4017.642: 100.0000% ( 88) 00:16:33.047 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.047 [ 00:16:33.047 { 00:16:33.047 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.047 "subtype": "Discovery", 00:16:33.047 "listen_addresses": [], 00:16:33.047 "allow_any_host": true, 00:16:33.047 "hosts": [] 00:16:33.047 }, 00:16:33.047 { 00:16:33.047 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.047 "subtype": "NVMe", 
00:16:33.047 "listen_addresses": [ 00:16:33.047 { 00:16:33.047 "trtype": "VFIOUSER", 00:16:33.047 "adrfam": "IPv4", 00:16:33.047 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.047 "trsvcid": "0" 00:16:33.047 } 00:16:33.047 ], 00:16:33.047 "allow_any_host": true, 00:16:33.047 "hosts": [], 00:16:33.047 "serial_number": "SPDK1", 00:16:33.047 "model_number": "SPDK bdev Controller", 00:16:33.047 "max_namespaces": 32, 00:16:33.047 "min_cntlid": 1, 00:16:33.047 "max_cntlid": 65519, 00:16:33.047 "namespaces": [ 00:16:33.047 { 00:16:33.047 "nsid": 1, 00:16:33.047 "bdev_name": "Malloc1", 00:16:33.047 "name": "Malloc1", 00:16:33.047 "nguid": "4E9196D1CB93437E9D7A6439F8BA57E0", 00:16:33.047 "uuid": "4e9196d1-cb93-437e-9d7a-6439f8ba57e0" 00:16:33.047 }, 00:16:33.047 { 00:16:33.047 "nsid": 2, 00:16:33.047 "bdev_name": "Malloc3", 00:16:33.047 "name": "Malloc3", 00:16:33.047 "nguid": "B85854CEEF404ABCA4D3977A3483A58D", 00:16:33.047 "uuid": "b85854ce-ef40-4abc-a4d3-977a3483a58d" 00:16:33.047 } 00:16:33.047 ] 00:16:33.047 }, 00:16:33.047 { 00:16:33.047 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.047 "subtype": "NVMe", 00:16:33.047 "listen_addresses": [ 00:16:33.047 { 00:16:33.047 "trtype": "VFIOUSER", 00:16:33.047 "adrfam": "IPv4", 00:16:33.047 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.047 "trsvcid": "0" 00:16:33.047 } 00:16:33.047 ], 00:16:33.047 "allow_any_host": true, 00:16:33.047 "hosts": [], 00:16:33.047 "serial_number": "SPDK2", 00:16:33.047 "model_number": "SPDK bdev Controller", 00:16:33.047 "max_namespaces": 32, 00:16:33.047 "min_cntlid": 1, 00:16:33.047 "max_cntlid": 65519, 00:16:33.047 "namespaces": [ 00:16:33.047 { 00:16:33.047 "nsid": 1, 00:16:33.047 "bdev_name": "Malloc2", 00:16:33.047 "name": "Malloc2", 00:16:33.047 "nguid": "636DDF0D1AA64B148B79CA04F621E08D", 00:16:33.047 "uuid": "636ddf0d-1aa6-4b14-8b79-ca04f621e08d" 00:16:33.047 } 00:16:33.047 ] 00:16:33.047 } 00:16:33.047 ] 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1091209 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:33.047 12:06:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:33.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.047 [2024-07-15 12:06:22.990637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.048 Malloc4 00:16:33.306 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:33.306 [2024-07-15 12:06:23.209347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.306 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.306 Asynchronous Event Request test 00:16:33.306 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.306 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.306 Registering asynchronous event callbacks... 00:16:33.306 Starting namespace attribute notice tests for all controllers... 00:16:33.306 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:33.306 aer_cb - Changed Namespace 00:16:33.306 Cleaning up... 00:16:33.565 [ 00:16:33.565 { 00:16:33.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.565 "subtype": "Discovery", 00:16:33.565 "listen_addresses": [], 00:16:33.565 "allow_any_host": true, 00:16:33.565 "hosts": [] 00:16:33.565 }, 00:16:33.565 { 00:16:33.565 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.565 "subtype": "NVMe", 00:16:33.565 "listen_addresses": [ 00:16:33.565 { 00:16:33.565 "trtype": "VFIOUSER", 00:16:33.565 "adrfam": "IPv4", 00:16:33.565 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.565 "trsvcid": "0" 00:16:33.565 } 00:16:33.565 ], 00:16:33.565 "allow_any_host": true, 00:16:33.565 "hosts": [], 00:16:33.565 "serial_number": "SPDK1", 00:16:33.565 "model_number": "SPDK bdev Controller", 00:16:33.565 "max_namespaces": 32, 00:16:33.565 "min_cntlid": 1, 00:16:33.565 "max_cntlid": 65519, 00:16:33.565 "namespaces": [ 00:16:33.565 { 00:16:33.565 "nsid": 1, 00:16:33.565 "bdev_name": "Malloc1", 00:16:33.565 "name": "Malloc1", 00:16:33.565 "nguid": "4E9196D1CB93437E9D7A6439F8BA57E0", 00:16:33.565 "uuid": "4e9196d1-cb93-437e-9d7a-6439f8ba57e0" 00:16:33.565 }, 00:16:33.565 { 00:16:33.565 "nsid": 2, 00:16:33.565 "bdev_name": "Malloc3", 00:16:33.565 "name": "Malloc3", 00:16:33.565 "nguid": "B85854CEEF404ABCA4D3977A3483A58D", 00:16:33.565 "uuid": "b85854ce-ef40-4abc-a4d3-977a3483a58d" 00:16:33.565 } 00:16:33.565 ] 00:16:33.565 }, 00:16:33.565 { 00:16:33.565 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.565 "subtype": "NVMe", 00:16:33.565 "listen_addresses": [ 00:16:33.565 { 00:16:33.565 "trtype": "VFIOUSER", 00:16:33.565 "adrfam": "IPv4", 00:16:33.565 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.565 "trsvcid": "0" 00:16:33.565 } 00:16:33.565 ], 00:16:33.565 "allow_any_host": true, 00:16:33.565 "hosts": [], 00:16:33.565 "serial_number": "SPDK2", 00:16:33.565 "model_number": "SPDK bdev Controller", 00:16:33.565 
"max_namespaces": 32, 00:16:33.565 "min_cntlid": 1, 00:16:33.565 "max_cntlid": 65519, 00:16:33.565 "namespaces": [ 00:16:33.565 { 00:16:33.565 "nsid": 1, 00:16:33.565 "bdev_name": "Malloc2", 00:16:33.565 "name": "Malloc2", 00:16:33.565 "nguid": "636DDF0D1AA64B148B79CA04F621E08D", 00:16:33.565 "uuid": "636ddf0d-1aa6-4b14-8b79-ca04f621e08d" 00:16:33.565 }, 00:16:33.565 { 00:16:33.565 "nsid": 2, 00:16:33.565 "bdev_name": "Malloc4", 00:16:33.565 "name": "Malloc4", 00:16:33.565 "nguid": "2971FBC0C39B464E95EFA9CE8B16EECF", 00:16:33.565 "uuid": "2971fbc0-c39b-464e-95ef-a9ce8b16eecf" 00:16:33.565 } 00:16:33.565 ] 00:16:33.565 } 00:16:33.565 ] 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1091209 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1083071 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1083071 ']' 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1083071 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1083071 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1083071' 00:16:33.565 killing process with pid 1083071 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1083071 00:16:33.565 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1083071 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1091301 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1091301' 00:16:33.823 Process pid: 1091301 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1091301 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1091301 ']' 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.823 12:06:23 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.823 12:06:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:33.823 [2024-07-15 12:06:23.761045] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:33.823 [2024-07-15 12:06:23.761932] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:16:33.823 [2024-07-15 12:06:23.761970] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.823 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.082 [2024-07-15 12:06:23.831919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.082 [2024-07-15 12:06:23.873154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.082 [2024-07-15 12:06:23.873194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.082 [2024-07-15 12:06:23.873201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.082 [2024-07-15 12:06:23.873207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.082 [2024-07-15 12:06:23.873212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.082 [2024-07-15 12:06:23.873270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.082 [2024-07-15 12:06:23.873326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.082 [2024-07-15 12:06:23.873452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.082 [2024-07-15 12:06:23.873454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.082 [2024-07-15 12:06:23.950405] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:34.082 [2024-07-15 12:06:23.950780] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:34.082 [2024-07-15 12:06:23.950782] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:34.082 [2024-07-15 12:06:23.950870] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:34.082 [2024-07-15 12:06:23.951136] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:34.647 12:06:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.647 12:06:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:16:34.647 12:06:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:35.578 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:35.836 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:35.836 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:35.836 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:35.836 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:35.836 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:36.094 Malloc1 00:16:36.094 12:06:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:36.352 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:36.352 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:36.609 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.609 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:36.609 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:36.867 Malloc2 00:16:36.867 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:37.125 12:06:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:37.125 12:06:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1091301 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1091301 ']' 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1091301 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.383 12:06:27 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091301 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091301' 00:16:37.383 killing process with pid 1091301 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1091301 00:16:37.383 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1091301 00:16:37.642 12:06:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:37.642 12:06:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:37.642 00:16:37.642 real 0m50.745s 00:16:37.642 user 3m20.730s 00:16:37.642 sys 0m3.545s 00:16:37.642 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:37.642 12:06:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:37.642 ************************************ 00:16:37.642 END TEST nvmf_vfio_user 00:16:37.642 ************************************ 00:16:37.642 12:06:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:37.642 12:06:27 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:37.642 12:06:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:37.642 12:06:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.642 12:06:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.642 ************************************ 00:16:37.642 START TEST nvmf_vfio_user_nvme_compliance 00:16:37.642 ************************************ 00:16:37.642 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:37.902 * Looking for test storage... 
00:16:37.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1091989 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1091989' 00:16:37.902 Process pid: 1091989 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1091989 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1091989 ']' 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.902 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.902 [2024-07-15 12:06:27.778134] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:16:37.902 [2024-07-15 12:06:27.778180] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.902 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.902 [2024-07-15 12:06:27.848075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.902 [2024-07-15 12:06:27.887789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.902 [2024-07-15 12:06:27.887831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.902 [2024-07-15 12:06:27.887839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.902 [2024-07-15 12:06:27.887845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.902 [2024-07-15 12:06:27.887851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:37.902 [2024-07-15 12:06:27.887912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.902 [2024-07-15 12:06:27.888018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.902 [2024-07-15 12:06:27.888019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.161 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.161 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:38.161 12:06:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.100 12:06:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.100 malloc0 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:39.100 12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.100 
12:06:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:39.359 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.359 00:16:39.359 00:16:39.359 CUnit - A unit testing framework for C - Version 2.1-3 00:16:39.359 http://cunit.sourceforge.net/ 00:16:39.359 00:16:39.359 00:16:39.359 Suite: nvme_compliance 00:16:39.359 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 12:06:29.201121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.359 [2024-07-15 12:06:29.202483] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:39.359 [2024-07-15 12:06:29.202499] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:39.359 [2024-07-15 12:06:29.202506] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:39.359 [2024-07-15 12:06:29.204137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.359 passed 00:16:39.359 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 12:06:29.282686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.359 [2024-07-15 12:06:29.285702] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.359 passed 00:16:39.618 Test: admin_identify_ns ...[2024-07-15 12:06:29.368606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.618 [2024-07-15 12:06:29.429241] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:39.618 [2024-07-15 12:06:29.436241] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:39.618 [2024-07-15 12:06:29.457334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.618 passed 00:16:39.618 Test: admin_get_features_mandatory_features ...[2024-07-15 12:06:29.533500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.618 [2024-07-15 12:06:29.536523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.618 passed 00:16:39.618 Test: admin_get_features_optional_features ...[2024-07-15 12:06:29.610028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.618 [2024-07-15 12:06:29.613048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.877 passed 00:16:39.877 Test: admin_set_features_number_of_queues ...[2024-07-15 12:06:29.690668] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.877 [2024-07-15 12:06:29.795450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:39.877 passed 00:16:39.877 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 12:06:29.875320] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:39.877 [2024-07-15 12:06:29.878338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.136 passed 00:16:40.136 Test: admin_get_log_page_with_lpo ...[2024-07-15 12:06:29.955752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.136 [2024-07-15 12:06:30.022236] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:40.136 [2024-07-15 12:06:30.035293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.136 passed 00:16:40.136 Test: fabric_property_get ...[2024-07-15 12:06:30.109684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.136 [2024-07-15 12:06:30.110918] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:40.136 [2024-07-15 12:06:30.113711] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.395 passed 00:16:40.395 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 12:06:30.194241] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.395 [2024-07-15 12:06:30.195480] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:40.395 [2024-07-15 12:06:30.197263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.395 passed 00:16:40.395 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 12:06:30.276781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.395 [2024-07-15 12:06:30.361238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:40.395 [2024-07-15 12:06:30.377233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:40.395 [2024-07-15 12:06:30.382387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.656 passed 00:16:40.656 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 12:06:30.458716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.656 [2024-07-15 12:06:30.459951] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:40.656 [2024-07-15 12:06:30.461735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.656 passed 00:16:40.656 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 12:06:30.539765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.656 [2024-07-15 12:06:30.616239] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:40.656 [2024-07-15 12:06:30.640237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:40.656 [2024-07-15 12:06:30.645321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.914 passed 00:16:40.914 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 12:06:30.723465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.915 [2024-07-15 12:06:30.724711] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:40.915 [2024-07-15 12:06:30.724734] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:40.915 [2024-07-15 12:06:30.726495] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.915 passed 00:16:40.915 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 12:06:30.804494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.915 [2024-07-15 12:06:30.900239] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:40.915 [2024-07-15 12:06:30.904252] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:40.915 [2024-07-15 12:06:30.912236] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:41.174 [2024-07-15 12:06:30.920231] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:41.174 [2024-07-15 12:06:30.949341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.174 passed 00:16:41.174 Test: admin_create_io_sq_verify_pc ...[2024-07-15 12:06:31.025325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.174 [2024-07-15 12:06:31.043239] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:41.174 [2024-07-15 12:06:31.060514] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.174 passed 00:16:41.174 Test: admin_create_io_qp_max_qps ...[2024-07-15 12:06:31.139054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.551 [2024-07-15 12:06:32.231237] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:42.809 [2024-07-15 12:06:32.608843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.809 passed 00:16:42.809 Test: admin_create_io_sq_shared_cq ...[2024-07-15 12:06:32.684700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.069 [2024-07-15 12:06:32.820234] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:43.069 [2024-07-15 12:06:32.857287] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:43.069 passed 00:16:43.069 00:16:43.069 Run Summary: Type Total Ran Passed Failed Inactive 00:16:43.069 suites 1 1 n/a 0 0 00:16:43.069 tests 18 18 18 0 0 00:16:43.069 asserts 360 360 360 0 n/a 00:16:43.069 00:16:43.069 Elapsed time = 1.503 seconds 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1091989 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1091989 ']' 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1091989 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091989 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091989' 00:16:43.069 killing process with pid 1091989 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1091989 00:16:43.069 12:06:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1091989 00:16:43.336 12:06:33 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:43.336 00:16:43.336 real 0m5.524s 00:16:43.336 user 0m15.632s 00:16:43.336 sys 0m0.440s 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:43.336 ************************************ 00:16:43.336 END TEST nvmf_vfio_user_nvme_compliance 00:16:43.336 ************************************ 00:16:43.336 12:06:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:43.336 12:06:33 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:43.336 12:06:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:43.336 12:06:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:43.336 12:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:43.336 ************************************ 00:16:43.336 START TEST nvmf_vfio_user_fuzz 00:16:43.336 ************************************ 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:43.336 * Looking for test storage... 00:16:43.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.336 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.337 12:06:33 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1092963 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1092963' 00:16:43.337 Process pid: 1092963 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1092963 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1092963 ']' 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.337 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:43.598 12:06:33 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.976 malloc0 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:44.976 12:06:34 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:17.090 Fuzzing completed. 
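Before the fuzzing summary below, the vfio_user_fuzz target setup traced above mirrors the compliance run: one 64 MiB/512 B malloc bdev is exposed through a vfio-user endpoint, and the generic nvme_fuzz client is pointed at it for 30 seconds with a fixed seed. Condensed from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the binary path is abbreviated):

  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER \
      -a /var/run/vfio-user -s 0
  # fuzz for 30 s, seed 123456, core mask 0x2, against the vfio-user trid
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a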
Shutting down the fuzz application 00:17:17.090 00:17:17.090 Dumping successful admin opcodes: 00:17:17.090 8, 9, 10, 24, 00:17:17.090 Dumping successful io opcodes: 00:17:17.090 0, 00:17:17.090 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1009616, total successful commands: 3958, random_seed: 3340547648 00:17:17.090 NS: 0x200003a1ef00 admin qp, Total commands completed: 250075, total successful commands: 2021, random_seed: 1633050688 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1092963 ']' 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1092963' 00:17:17.090 killing process with pid 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1092963 00:17:17.090 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:17.091 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:17.091 00:17:17.091 real 0m32.355s 00:17:17.091 user 0m30.681s 00:17:17.091 sys 0m30.495s 00:17:17.091 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.091 12:07:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 ************************************ 00:17:17.091 END TEST nvmf_vfio_user_fuzz 00:17:17.091 ************************************ 00:17:17.091 12:07:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.091 12:07:05 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:17.091 12:07:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.091 12:07:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.091 12:07:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.091 ************************************ 
00:17:17.091 START TEST nvmf_host_management 00:17:17.091 ************************************ 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:17.091 * Looking for test storage... 00:17:17.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.091 
12:07:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.091 12:07:05 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.091 12:07:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:21.286 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:21.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:21.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:21.287 Found net devices under 0000:86:00.0: cvl_0_0 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:21.287 Found net devices under 0000:86:00.1: cvl_0_1 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.287 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.546 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:21.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:17:21.547 00:17:21.547 --- 10.0.0.2 ping statistics --- 00:17:21.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.547 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:17:21.547 00:17:21.547 --- 10.0.0.1 ping statistics --- 00:17:21.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.547 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:21.547 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1101402 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1101402 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1101402 ']' 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.805 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:21.805 [2024-07-15 12:07:11.604841] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:17:21.805 [2024-07-15 12:07:11.604888] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.805 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.805 [2024-07-15 12:07:11.678334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.805 [2024-07-15 12:07:11.721454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.805 [2024-07-15 12:07:11.721494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.805 [2024-07-15 12:07:11.721503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.805 [2024-07-15 12:07:11.721509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.805 [2024-07-15 12:07:11.721513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.805 [2024-07-15 12:07:11.721626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.805 [2024-07-15 12:07:11.721732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.805 [2024-07-15 12:07:11.721814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.805 [2024-07-15 12:07:11.721816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 [2024-07-15 12:07:11.855113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 12:07:11 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 Malloc0 00:17:22.065 [2024-07-15 12:07:11.915027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1101506 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1101506 /var/tmp/bdevperf.sock 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1101506 ']' 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:22.065 { 00:17:22.065 "params": { 00:17:22.065 "name": "Nvme$subsystem", 00:17:22.065 "trtype": "$TEST_TRANSPORT", 00:17:22.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:22.065 "adrfam": "ipv4", 00:17:22.065 "trsvcid": "$NVMF_PORT", 00:17:22.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:22.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:22.065 "hdgst": ${hdgst:-false}, 00:17:22.065 "ddgst": ${ddgst:-false} 00:17:22.065 }, 00:17:22.065 "method": "bdev_nvme_attach_controller" 00:17:22.065 } 00:17:22.065 EOF 00:17:22.065 )") 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
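The gen_nvmf_target_json heredoc above renders one bdev_nvme_attach_controller entry per subsystem (the expanded fragment is printed a few lines further down), and bdevperf consumes it through process substitution, which is why the trace shows --json /dev/fd/63. A minimal sketch of how host_management.sh drives bdevperf here, condensed from the trace and with paths abbreviated:

  # launch bdevperf with the generated NVMe-oF attach config for subsystem 0
  build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!                                         # 1101506 in this run
  waitforlisten "$perfpid" /var/tmp/bdevperf.sock    # wait for the bdevperf RPC socket
  rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init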
00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:22.065 12:07:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:22.065 "params": { 00:17:22.065 "name": "Nvme0", 00:17:22.065 "trtype": "tcp", 00:17:22.065 "traddr": "10.0.0.2", 00:17:22.065 "adrfam": "ipv4", 00:17:22.065 "trsvcid": "4420", 00:17:22.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.065 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:22.065 "hdgst": false, 00:17:22.065 "ddgst": false 00:17:22.065 }, 00:17:22.065 "method": "bdev_nvme_attach_controller" 00:17:22.065 }' 00:17:22.065 [2024-07-15 12:07:12.004731] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:17:22.065 [2024-07-15 12:07:12.004777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101506 ] 00:17:22.065 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.324 [2024-07-15 12:07:12.073179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.324 [2024-07-15 12:07:12.113340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.324 Running I/O for 10 seconds... 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:17:22.583 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.844 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.844 [2024-07-15 12:07:12.705592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.844 [2024-07-15 12:07:12.705635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.844 [2024-07-15 12:07:12.705642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.844 [2024-07-15 12:07:12.705650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.845 [2024-07-15 12:07:12.705656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.845 [2024-07-15 12:07:12.705663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.845 [2024-07-15 12:07:12.705669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.845 [2024-07-15 12:07:12.705675] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set 00:17:22.845 
[2024-07-15 12:07:12.705682 .. 12:07:12.706032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1745390 is same with the state(5) to be set (same message repeated several dozen times for this tqpair) 00:17:22.845 
[2024-07-15 12:07:12.706143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-07-15 12:07:12.706176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:22.845 
[2024-07-15 12:07:12.706193 .. 12:07:12.707276] nvme_qpair.c: 243/474: (the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid 1 through cid 63, lba 90240 through 98176 in 128-block steps, len:128 each) 00:17:22.846 
[2024-07-15 12:07:12.707285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15107f0 is same with the state(5) to be set 00:17:22.845 [2024-07-15 12:07:12.707337]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15107f0 was disconnected and freed. reset controller. 00:17:22.847 [2024-07-15 12:07:12.708280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:22.847 task offset: 90112 on job bdev=Nvme0n1 fails 00:17:22.847 00:17:22.847 Latency(us) 00:17:22.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.847 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:22.847 Job: Nvme0n1 ended in about 0.40 seconds with error 00:17:22.847 Verification LBA range: start 0x0 length 0x400 00:17:22.847 Nvme0n1 : 0.40 1767.15 110.45 160.65 0.00 32298.72 6582.09 28265.96 00:17:22.847 =================================================================================================================== 00:17:22.847 Total : 1767.15 110.45 160.65 0.00 32298.72 6582.09 28265.96 00:17:22.847 [2024-07-15 12:07:12.709915] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:22.847 [2024-07-15 12:07:12.709932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ff2d0 (9): Bad file descriptor 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.847 12:07:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:22.847 [2024-07-15 12:07:12.761658] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
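Everything from the SQ DELETION aborts to the successful reset above is driven by two host-management RPCs against the running target: revoking host0's access to cnode0 drops its live TCP qpair, and re-granting it lets bdevperf's automatic controller reset reconnect. A rough sketch of that sequence with the same rpc.py, for illustration only (the target listens on its default /var/tmp/spdk.sock here):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Revoke access: the target disconnects the host's qpair and in-flight
    # reads complete with ABORTED - SQ DELETION, as logged above.
    "$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Re-grant access so the initiator's reset path can reconnect successfully.
    "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0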
00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1101506 00:17:23.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1101506) - No such process 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:23.783 { 00:17:23.783 "params": { 00:17:23.783 "name": "Nvme$subsystem", 00:17:23.783 "trtype": "$TEST_TRANSPORT", 00:17:23.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:23.783 "adrfam": "ipv4", 00:17:23.783 "trsvcid": "$NVMF_PORT", 00:17:23.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:23.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:23.783 "hdgst": ${hdgst:-false}, 00:17:23.783 "ddgst": ${ddgst:-false} 00:17:23.783 }, 00:17:23.783 "method": "bdev_nvme_attach_controller" 00:17:23.783 } 00:17:23.783 EOF 00:17:23.783 )") 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:23.783 12:07:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:23.783 "params": { 00:17:23.783 "name": "Nvme0", 00:17:23.783 "trtype": "tcp", 00:17:23.783 "traddr": "10.0.0.2", 00:17:23.783 "adrfam": "ipv4", 00:17:23.783 "trsvcid": "4420", 00:17:23.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:23.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:23.783 "hdgst": false, 00:17:23.783 "ddgst": false 00:17:23.783 }, 00:17:23.783 "method": "bdev_nvme_attach_controller" 00:17:23.783 }' 00:17:23.783 [2024-07-15 12:07:13.771000] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:17:23.783 [2024-07-15 12:07:13.771050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101759 ] 00:17:24.042 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.042 [2024-07-15 12:07:13.840780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.042 [2024-07-15 12:07:13.878373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.042 Running I/O for 1 seconds... 
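For reference, the read-progress gate used in the first run (read_io_count going from 67 to 643 before the host was removed) is just a poll of bdev_get_iostat over the bdevperf RPC socket until num_read_ops crosses 100. A condensed sketch of that loop, using only the commands visible in the trace above:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    i=10
    while (( i != 0 )); do
        # num_read_ops for the bdev under test (Nvme0n1 in this run)
        reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
        (( i-- ))
    done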
00:17:25.422 00:17:25.422 Latency(us) 00:17:25.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.422 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:25.422 Verification LBA range: start 0x0 length 0x400 00:17:25.422 Nvme0n1 : 1.00 1911.70 119.48 0.00 0.00 32957.31 7750.34 27468.13 00:17:25.422 =================================================================================================================== 00:17:25.422 Total : 1911.70 119.48 0.00 0.00 32957.31 7750.34 27468.13 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.422 rmmod nvme_tcp 00:17:25.422 rmmod nvme_fabrics 00:17:25.422 rmmod nvme_keyring 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1101402 ']' 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1101402 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1101402 ']' 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1101402 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1101402 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1101402' 00:17:25.422 killing process with pid 1101402 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1101402 00:17:25.422 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1101402 00:17:25.682 [2024-07-15 12:07:15.506146] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.682 12:07:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.215 12:07:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:28.215 12:07:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:28.215 00:17:28.215 real 0m11.959s 00:17:28.215 user 0m18.832s 00:17:28.215 sys 0m5.363s 00:17:28.215 12:07:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:28.215 12:07:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:28.215 ************************************ 00:17:28.215 END TEST nvmf_host_management 00:17:28.215 ************************************ 00:17:28.215 12:07:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:28.215 12:07:17 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:28.215 12:07:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:28.215 12:07:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.215 12:07:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.215 ************************************ 00:17:28.215 START TEST nvmf_lvol 00:17:28.215 ************************************ 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:28.215 * Looking for test storage... 
00:17:28.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.215 12:07:17 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.215 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.216 12:07:17 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:33.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:33.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:33.490 Found net devices under 0000:86:00.0: cvl_0_0 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:33.490 Found net devices under 0000:86:00.1: cvl_0_1 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.490 
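(The device probing traced above matches PCI device IDs against the e810/x722/mlx arrays and then lists the kernel net devices that sit under each PCI function in sysfs. A minimal sketch of the same lookup done by hand, assuming the 0000:86:00.0 address found above; the $pci variable is only illustrative:)
  # list the net interfaces exposed by one detected E810 function
  pci=0000:86:00.0
  ls "/sys/bus/pci/devices/$pci/net/"       # -> cvl_0_0 in this run
  cat "/sys/bus/pci/devices/$pci/device"    # -> 0x159b, the ID matched above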
12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:33.490 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:33.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:17:33.749 00:17:33.749 --- 10.0.0.2 ping statistics --- 00:17:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.749 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:33.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:17:33.749 00:17:33.749 --- 10.0.0.1 ping statistics --- 00:17:33.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.749 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1105514 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1105514 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1105514 ']' 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:33.749 12:07:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:33.749 [2024-07-15 12:07:23.645660] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:17:33.749 [2024-07-15 12:07:23.645708] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.749 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.749 [2024-07-15 12:07:23.719244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.008 [2024-07-15 12:07:23.760612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.008 [2024-07-15 12:07:23.760651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
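(nvmf_tcp_init above splits the two E810 ports between the root network namespace and a dedicated target namespace, so the NVMe/TCP target and initiator talk over real NICs on one box. A condensed sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in the trace:)
  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity check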
00:17:34.008 [2024-07-15 12:07:23.760659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.008 [2024-07-15 12:07:23.760665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.008 [2024-07-15 12:07:23.760670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.008 [2024-07-15 12:07:23.760726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.008 [2024-07-15 12:07:23.760832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.008 [2024-07-15 12:07:23.760833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.586 12:07:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:34.901 [2024-07-15 12:07:24.637396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.901 12:07:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:34.901 12:07:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:34.901 12:07:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.159 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:35.159 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:35.418 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:35.676 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=91376651-3674-4305-98a2-c4df72d46b72 00:17:35.676 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91376651-3674-4305-98a2-c4df72d46b72 lvol 20 00:17:35.676 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=65512adc-bce7-4a56-9867-fa32011ee4aa 00:17:35.676 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:35.934 12:07:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 65512adc-bce7-4a56-9867-fa32011ee4aa 00:17:36.193 12:07:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
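(The RPCs traced above build the nvmf_lvol target stack: two 64 MiB malloc bdevs striped into raid0, an lvstore on top, a 20 MiB lvol, and an NVMe-oF subsystem exporting it over TCP. A hand-run sketch of the same sequence, where $SPDK stands for the spdk checkout and <lvs-uuid>/<lvol-uuid> are placeholders for the UUIDs the RPCs print:)
  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                 # Malloc0
  $rpc bdev_malloc_create 64 512                 # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  $rpc bdev_lvol_create_lvstore raid0 lvs        # prints <lvs-uuid>
  $rpc bdev_lvol_create -u <lvs-uuid> lvol 20    # prints <lvol-uuid>
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420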
00:17:36.193 [2024-07-15 12:07:26.160834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.193 12:07:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:36.452 12:07:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1106012 00:17:36.452 12:07:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:36.452 12:07:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:36.452 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.389 12:07:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 65512adc-bce7-4a56-9867-fa32011ee4aa MY_SNAPSHOT 00:17:37.648 12:07:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a065a99b-0ee1-4bee-ab1c-cbba831ea303 00:17:37.648 12:07:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 65512adc-bce7-4a56-9867-fa32011ee4aa 30 00:17:37.907 12:07:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a065a99b-0ee1-4bee-ab1c-cbba831ea303 MY_CLONE 00:17:38.166 12:07:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a1df48f8-adbb-4369-af41-fafaffbab025 00:17:38.166 12:07:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a1df48f8-adbb-4369-af41-fafaffbab025 00:17:38.733 12:07:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1106012 00:17:46.854 Initializing NVMe Controllers 00:17:46.854 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:46.854 Controller IO queue size 128, less than required. 00:17:46.854 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:46.854 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:46.854 Initialization complete. Launching workers. 
00:17:46.854 ======================================================== 00:17:46.854 Latency(us) 00:17:46.854 Device Information : IOPS MiB/s Average min max 00:17:46.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12520.90 48.91 10226.52 1503.00 64276.03 00:17:46.854 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12398.20 48.43 10324.43 3443.88 53935.54 00:17:46.854 ======================================================== 00:17:46.854 Total : 24919.10 97.34 10275.23 1503.00 64276.03 00:17:46.854 00:17:46.854 12:07:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:47.113 12:07:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 65512adc-bce7-4a56-9867-fa32011ee4aa 00:17:47.113 12:07:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91376651-3674-4305-98a2-c4df72d46b72 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:47.373 rmmod nvme_tcp 00:17:47.373 rmmod nvme_fabrics 00:17:47.373 rmmod nvme_keyring 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1105514 ']' 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1105514 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1105514 ']' 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1105514 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.373 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105514 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105514' 00:17:47.632 killing process with pid 1105514 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1105514 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1105514 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:47.632 
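(While spdk_nvme_perf drives random writes against the exported namespace from two cores, the test mutates the lvol underneath it: snapshot, resize to 30 MiB, clone, inflate. The teardown then removes the subsystem, lvol and lvstore. The equivalent steps, with $SPDK the spdk checkout and placeholder UUIDs, would be roughly:)
  rpc=$SPDK/scripts/rpc.py
  $SPDK/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &        # background I/O load
  $rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
  $rpc bdev_lvol_resize  <lvol-uuid> 30
  $rpc bdev_lvol_clone   <snapshot-uuid> MY_CLONE
  $rpc bdev_lvol_inflate <clone-uuid>
  wait                                                          # let perf finish its 10 s run
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete <lvol-uuid>
  $rpc bdev_lvol_delete_lvstore -u <lvs-uuid>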
12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.632 12:07:37 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:50.170 00:17:50.170 real 0m22.022s 00:17:50.170 user 1m4.151s 00:17:50.170 sys 0m7.107s 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:50.170 ************************************ 00:17:50.170 END TEST nvmf_lvol 00:17:50.170 ************************************ 00:17:50.170 12:07:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:50.170 12:07:39 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:50.170 12:07:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:50.170 12:07:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.170 12:07:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:50.170 ************************************ 00:17:50.170 START TEST nvmf_lvs_grow 00:17:50.170 ************************************ 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:50.170 * Looking for test storage... 
00:17:50.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.170 12:07:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:55.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:55.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:55.445 Found net devices under 0000:86:00.0: cvl_0_0 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:55.445 Found net devices under 0000:86:00.1: cvl_0_1 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.445 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:55.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:17:55.704 00:17:55.704 --- 10.0.0.2 ping statistics --- 00:17:55.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.704 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:17:55.704 00:17:55.704 --- 10.0.0.1 ping statistics --- 00:17:55.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.704 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1111355 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1111355 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1111355 ']' 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.704 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:55.704 [2024-07-15 12:07:45.699060] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:17:55.704 [2024-07-15 12:07:45.699101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.962 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.962 [2024-07-15 12:07:45.768411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.962 [2024-07-15 12:07:45.808169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.962 [2024-07-15 12:07:45.808208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
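(For nvmf_lvs_grow the target application itself is launched inside the namespace created above, so its listeners bind to the namespaced port. A sketch of the launch, assuming the same workspace layout as the trace; the earlier nvmf_lvol run used -m 0x7 instead of -m 0x1:)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # once it is listening on /var/tmp/spdk.sock:
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192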
00:17:55.962 [2024-07-15 12:07:45.808215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.962 [2024-07-15 12:07:45.808228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.962 [2024-07-15 12:07:45.808233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:55.962 [2024-07-15 12:07:45.808252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.962 12:07:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:56.221 [2024-07-15 12:07:46.088240] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:56.221 ************************************ 00:17:56.221 START TEST lvs_grow_clean 00:17:56.221 ************************************ 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:56.221 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:56.479 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:56.479 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5cb46a58-283f-4fa3-a8b5-5892627a093d 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:56.737 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5cb46a58-283f-4fa3-a8b5-5892627a093d lvol 150 00:17:56.995 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f23d0ef-dd62-40ef-8d31-4534e937b334 00:17:56.995 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:56.995 12:07:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:57.253 [2024-07-15 12:07:47.036950] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:57.253 [2024-07-15 12:07:47.036998] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:57.253 true 00:17:57.253 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:17:57.253 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:57.253 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:57.253 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:57.511 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f23d0ef-dd62-40ef-8d31-4534e937b334 00:17:57.770 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:57.770 [2024-07-15 12:07:47.702960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.770 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
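(lvs_grow_clean verifies that an lvstore can grow when its backing bdev grows: a 200 MiB file-backed AIO bdev is created, an lvstore with 4 MiB clusters goes on top (49 data clusters), a 150 MiB lvol is carved out, then the file is extended to 400 MiB, the AIO bdev is rescanned and the lvstore grown to 99 clusters, as seen later in the trace. The core sequence, with $SPDK the spdk checkout and <lvs-uuid> a placeholder:)
  rpc=$SPDK/scripts/rpc.py
  aio=$SPDK/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs   # prints <lvs-uuid>
  $rpc bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
  $rpc bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M "$aio"                        # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                  # the AIO bdev picks up the new size
  $rpc bdev_lvol_grow_lvstore -u <lvs-uuid>
  $rpc bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 99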
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1111643 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1111643 /var/tmp/bdevperf.sock 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1111643 ']' 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.030 12:07:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:58.030 [2024-07-15 12:07:47.932718] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:17:58.030 [2024-07-15 12:07:47.932767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111643 ] 00:17:58.030 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.030 [2024-07-15 12:07:47.999151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.351 [2024-07-15 12:07:48.039302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.351 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.351 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:58.351 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:58.609 Nvme0n1 00:17:58.609 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:58.867 [ 00:17:58.867 { 00:17:58.867 "name": "Nvme0n1", 00:17:58.867 "aliases": [ 00:17:58.867 "7f23d0ef-dd62-40ef-8d31-4534e937b334" 00:17:58.867 ], 00:17:58.867 "product_name": "NVMe disk", 00:17:58.867 "block_size": 4096, 00:17:58.867 "num_blocks": 38912, 00:17:58.867 "uuid": "7f23d0ef-dd62-40ef-8d31-4534e937b334", 00:17:58.867 "assigned_rate_limits": { 00:17:58.867 "rw_ios_per_sec": 0, 00:17:58.867 "rw_mbytes_per_sec": 0, 00:17:58.867 "r_mbytes_per_sec": 0, 00:17:58.867 "w_mbytes_per_sec": 0 00:17:58.867 }, 00:17:58.867 "claimed": false, 00:17:58.867 "zoned": false, 00:17:58.867 "supported_io_types": { 00:17:58.867 "read": true, 00:17:58.867 "write": true, 00:17:58.867 "unmap": true, 00:17:58.867 "flush": true, 00:17:58.867 "reset": true, 00:17:58.867 "nvme_admin": true, 00:17:58.867 "nvme_io": true, 00:17:58.867 "nvme_io_md": false, 00:17:58.867 "write_zeroes": true, 00:17:58.868 "zcopy": false, 00:17:58.868 "get_zone_info": false, 00:17:58.868 "zone_management": false, 00:17:58.868 "zone_append": false, 00:17:58.868 "compare": true, 00:17:58.868 "compare_and_write": true, 00:17:58.868 "abort": true, 00:17:58.868 "seek_hole": false, 00:17:58.868 "seek_data": false, 00:17:58.868 "copy": true, 00:17:58.868 "nvme_iov_md": false 00:17:58.868 }, 00:17:58.868 "memory_domains": [ 00:17:58.868 { 00:17:58.868 "dma_device_id": "system", 00:17:58.868 "dma_device_type": 1 00:17:58.868 } 00:17:58.868 ], 00:17:58.868 "driver_specific": { 00:17:58.868 "nvme": [ 00:17:58.868 { 00:17:58.868 "trid": { 00:17:58.868 "trtype": "TCP", 00:17:58.868 "adrfam": "IPv4", 00:17:58.868 "traddr": "10.0.0.2", 00:17:58.868 "trsvcid": "4420", 00:17:58.868 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:58.868 }, 00:17:58.868 "ctrlr_data": { 00:17:58.868 "cntlid": 1, 00:17:58.868 "vendor_id": "0x8086", 00:17:58.868 "model_number": "SPDK bdev Controller", 00:17:58.868 "serial_number": "SPDK0", 00:17:58.868 "firmware_revision": "24.09", 00:17:58.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:58.868 "oacs": { 00:17:58.868 "security": 0, 00:17:58.868 "format": 0, 00:17:58.868 "firmware": 0, 00:17:58.868 "ns_manage": 0 00:17:58.868 }, 00:17:58.868 "multi_ctrlr": true, 00:17:58.868 "ana_reporting": false 00:17:58.868 }, 
00:17:58.868 "vs": { 00:17:58.868 "nvme_version": "1.3" 00:17:58.868 }, 00:17:58.868 "ns_data": { 00:17:58.868 "id": 1, 00:17:58.868 "can_share": true 00:17:58.868 } 00:17:58.868 } 00:17:58.868 ], 00:17:58.868 "mp_policy": "active_passive" 00:17:58.868 } 00:17:58.868 } 00:17:58.868 ] 00:17:58.868 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1111864 00:17:58.868 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:58.868 12:07:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.868 Running I/O for 10 seconds... 00:17:59.804 Latency(us) 00:17:59.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.804 Nvme0n1 : 1.00 22109.00 86.36 0.00 0.00 0.00 0.00 0.00 00:17:59.804 =================================================================================================================== 00:17:59.804 Total : 22109.00 86.36 0.00 0.00 0.00 0.00 0.00 00:17:59.804 00:18:00.747 12:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:01.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.005 Nvme0n1 : 2.00 22066.50 86.20 0.00 0.00 0.00 0.00 0.00 00:18:01.005 =================================================================================================================== 00:18:01.005 Total : 22066.50 86.20 0.00 0.00 0.00 0.00 0.00 00:18:01.005 00:18:01.005 true 00:18:01.005 12:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:01.005 12:07:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:01.264 12:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:01.264 12:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:01.264 12:07:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1111864 00:18:01.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:01.830 Nvme0n1 : 3.00 22153.67 86.54 0.00 0.00 0.00 0.00 0.00 00:18:01.830 =================================================================================================================== 00:18:01.830 Total : 22153.67 86.54 0.00 0.00 0.00 0.00 0.00 00:18:01.830 00:18:03.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:03.205 Nvme0n1 : 4.00 22255.25 86.93 0.00 0.00 0.00 0.00 0.00 00:18:03.205 =================================================================================================================== 00:18:03.205 Total : 22255.25 86.93 0.00 0.00 0.00 0.00 0.00 00:18:03.205 00:18:04.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:04.140 Nvme0n1 : 5.00 22325.80 87.21 0.00 0.00 0.00 0.00 0.00 00:18:04.140 =================================================================================================================== 00:18:04.140 
Total : 22325.80 87.21 0.00 0.00 0.00 0.00 0.00 00:18:04.140 00:18:05.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:05.076 Nvme0n1 : 6.00 22378.17 87.41 0.00 0.00 0.00 0.00 0.00 00:18:05.076 =================================================================================================================== 00:18:05.076 Total : 22378.17 87.41 0.00 0.00 0.00 0.00 0.00 00:18:05.076 00:18:06.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.012 Nvme0n1 : 7.00 22419.00 87.57 0.00 0.00 0.00 0.00 0.00 00:18:06.012 =================================================================================================================== 00:18:06.012 Total : 22419.00 87.57 0.00 0.00 0.00 0.00 0.00 00:18:06.012 00:18:06.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.947 Nvme0n1 : 8.00 22452.62 87.71 0.00 0.00 0.00 0.00 0.00 00:18:06.947 =================================================================================================================== 00:18:06.947 Total : 22452.62 87.71 0.00 0.00 0.00 0.00 0.00 00:18:06.947 00:18:07.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.883 Nvme0n1 : 9.00 22482.33 87.82 0.00 0.00 0.00 0.00 0.00 00:18:07.883 =================================================================================================================== 00:18:07.883 Total : 22482.33 87.82 0.00 0.00 0.00 0.00 0.00 00:18:07.883 00:18:09.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.261 Nvme0n1 : 10.00 22502.10 87.90 0.00 0.00 0.00 0.00 0.00 00:18:09.261 =================================================================================================================== 00:18:09.261 Total : 22502.10 87.90 0.00 0.00 0.00 0.00 0.00 00:18:09.261 00:18:09.261 00:18:09.261 Latency(us) 00:18:09.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.261 Nvme0n1 : 10.01 22501.47 87.90 0.00 0.00 5684.46 4359.57 14702.86 00:18:09.261 =================================================================================================================== 00:18:09.261 Total : 22501.47 87.90 0.00 0.00 5684.46 4359.57 14702.86 00:18:09.261 0 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1111643 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1111643 ']' 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1111643 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111643 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111643' 00:18:09.261 killing process with pid 1111643 00:18:09.261 12:07:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1111643 00:18:09.261 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.261 00:18:09.261 Latency(us) 00:18:09.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.261 =================================================================================================================== 00:18:09.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.261 12:07:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1111643 00:18:09.261 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:09.261 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:09.520 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:09.520 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:09.778 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:09.778 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:09.778 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:10.038 [2024-07-15 12:07:59.808838] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:10.038 12:07:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:10.038 request: 00:18:10.038 { 00:18:10.038 "uuid": "5cb46a58-283f-4fa3-a8b5-5892627a093d", 00:18:10.038 "method": "bdev_lvol_get_lvstores", 00:18:10.038 "req_id": 1 00:18:10.038 } 00:18:10.038 Got JSON-RPC error response 00:18:10.038 response: 00:18:10.038 { 00:18:10.038 "code": -19, 00:18:10.038 "message": "No such device" 00:18:10.038 } 00:18:10.038 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:10.038 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:10.038 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:10.038 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:10.038 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:10.297 aio_bdev 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f23d0ef-dd62-40ef-8d31-4534e937b334 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=7f23d0ef-dd62-40ef-8d31-4534e937b334 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:10.297 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:10.556 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f23d0ef-dd62-40ef-8d31-4534e937b334 -t 2000 00:18:10.556 [ 00:18:10.556 { 00:18:10.556 "name": "7f23d0ef-dd62-40ef-8d31-4534e937b334", 00:18:10.556 "aliases": [ 00:18:10.556 "lvs/lvol" 00:18:10.556 ], 00:18:10.556 "product_name": "Logical Volume", 00:18:10.556 "block_size": 4096, 00:18:10.556 "num_blocks": 38912, 00:18:10.556 "uuid": "7f23d0ef-dd62-40ef-8d31-4534e937b334", 00:18:10.556 "assigned_rate_limits": { 00:18:10.556 "rw_ios_per_sec": 0, 00:18:10.556 "rw_mbytes_per_sec": 0, 00:18:10.556 "r_mbytes_per_sec": 0, 00:18:10.556 "w_mbytes_per_sec": 0 00:18:10.556 }, 00:18:10.556 "claimed": false, 00:18:10.556 "zoned": false, 00:18:10.556 "supported_io_types": { 00:18:10.556 "read": true, 00:18:10.556 "write": true, 00:18:10.556 "unmap": true, 00:18:10.556 "flush": false, 00:18:10.556 "reset": true, 00:18:10.556 "nvme_admin": false, 00:18:10.556 "nvme_io": false, 00:18:10.556 
"nvme_io_md": false, 00:18:10.556 "write_zeroes": true, 00:18:10.556 "zcopy": false, 00:18:10.556 "get_zone_info": false, 00:18:10.556 "zone_management": false, 00:18:10.556 "zone_append": false, 00:18:10.556 "compare": false, 00:18:10.556 "compare_and_write": false, 00:18:10.556 "abort": false, 00:18:10.556 "seek_hole": true, 00:18:10.556 "seek_data": true, 00:18:10.556 "copy": false, 00:18:10.556 "nvme_iov_md": false 00:18:10.556 }, 00:18:10.556 "driver_specific": { 00:18:10.556 "lvol": { 00:18:10.556 "lvol_store_uuid": "5cb46a58-283f-4fa3-a8b5-5892627a093d", 00:18:10.556 "base_bdev": "aio_bdev", 00:18:10.556 "thin_provision": false, 00:18:10.556 "num_allocated_clusters": 38, 00:18:10.556 "snapshot": false, 00:18:10.556 "clone": false, 00:18:10.556 "esnap_clone": false 00:18:10.556 } 00:18:10.556 } 00:18:10.556 } 00:18:10.556 ] 00:18:10.556 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:18:10.556 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:10.556 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:10.815 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:10.815 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:10.815 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:11.074 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:11.074 12:08:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f23d0ef-dd62-40ef-8d31-4534e937b334 00:18:11.074 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5cb46a58-283f-4fa3-a8b5-5892627a093d 00:18:11.333 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.591 00:18:11.591 real 0m15.309s 00:18:11.591 user 0m14.821s 00:18:11.591 sys 0m1.449s 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:11.591 ************************************ 00:18:11.591 END TEST lvs_grow_clean 00:18:11.591 ************************************ 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:11.591 ************************************ 00:18:11.591 START TEST lvs_grow_dirty 00:18:11.591 ************************************ 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.591 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:11.850 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:11.850 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:12.108 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:12.108 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:12.108 12:08:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:12.108 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:12.108 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:12.108 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 lvol 150 00:18:12.367 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f4856f31-f123-4681-9a8b-669199258ca7 00:18:12.367 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:12.367 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:12.626 
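The dirty-mode setup above builds a 200M AIO file, an lvstore with 4 MiB clusters on it, and a 150 MiB lvol, then doubles the file and rescans; the rescan notice that follows confirms the block count change from 51200 to 102400. Reduced to plain RPC calls (backing-file path shortened, <lvs-uuid> standing in for the UUID reported in this run), the grow sequence the test exercises is roughly:

  truncate -s 200M /path/to/aio_bdev
  ./scripts/rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  ./scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150

  # Grow the backing file, have the AIO bdev re-read its size, then grow the lvstore
  truncate -s 400M /path/to/aio_bdev
  ./scripts/rpc.py bdev_aio_rescan aio_bdev
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49 -> 99 in this run

In the test itself the bdev_lvol_grow_lvstore call is deferred until a few seconds into the bdevperf run, so the lvstore grows while I/O is in flight.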
[2024-07-15 12:08:02.408959] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:12.626 [2024-07-15 12:08:02.409011] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:12.626 true 00:18:12.626 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:12.626 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:12.626 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:12.626 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:12.885 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4856f31-f123-4681-9a8b-669199258ca7 00:18:13.144 12:08:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:13.144 [2024-07-15 12:08:03.086959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.144 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1114230 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1114230 /var/tmp/bdevperf.sock 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1114230 ']' 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
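Before bdevperf can attach, the lvol is exported over NVMe-oF TCP. The subsystem wiring seen just above amounts to three RPCs against the running target (lvol UUID taken from this run, 10.0.0.2:4420 as configured by the harness):

  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f4856f31-f123-4681-9a8b-669199258ca7
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420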
00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.403 12:08:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:13.403 [2024-07-15 12:08:03.318052] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:18:13.403 [2024-07-15 12:08:03.318098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114230 ] 00:18:13.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.403 [2024-07-15 12:08:03.386131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.662 [2024-07-15 12:08:03.426944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.280 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.280 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:14.280 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:14.540 Nvme0n1 00:18:14.540 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:14.797 [ 00:18:14.797 { 00:18:14.797 "name": "Nvme0n1", 00:18:14.797 "aliases": [ 00:18:14.797 "f4856f31-f123-4681-9a8b-669199258ca7" 00:18:14.797 ], 00:18:14.797 "product_name": "NVMe disk", 00:18:14.797 "block_size": 4096, 00:18:14.797 "num_blocks": 38912, 00:18:14.797 "uuid": "f4856f31-f123-4681-9a8b-669199258ca7", 00:18:14.797 "assigned_rate_limits": { 00:18:14.797 "rw_ios_per_sec": 0, 00:18:14.797 "rw_mbytes_per_sec": 0, 00:18:14.797 "r_mbytes_per_sec": 0, 00:18:14.797 "w_mbytes_per_sec": 0 00:18:14.797 }, 00:18:14.797 "claimed": false, 00:18:14.797 "zoned": false, 00:18:14.797 "supported_io_types": { 00:18:14.797 "read": true, 00:18:14.797 "write": true, 00:18:14.797 "unmap": true, 00:18:14.797 "flush": true, 00:18:14.797 "reset": true, 00:18:14.797 "nvme_admin": true, 00:18:14.797 "nvme_io": true, 00:18:14.797 "nvme_io_md": false, 00:18:14.797 "write_zeroes": true, 00:18:14.797 "zcopy": false, 00:18:14.797 "get_zone_info": false, 00:18:14.797 "zone_management": false, 00:18:14.797 "zone_append": false, 00:18:14.797 "compare": true, 00:18:14.797 "compare_and_write": true, 00:18:14.797 "abort": true, 00:18:14.797 "seek_hole": false, 00:18:14.797 "seek_data": false, 00:18:14.797 "copy": true, 00:18:14.797 "nvme_iov_md": false 00:18:14.797 }, 00:18:14.797 "memory_domains": [ 00:18:14.797 { 00:18:14.797 "dma_device_id": "system", 00:18:14.797 "dma_device_type": 1 00:18:14.797 } 00:18:14.797 ], 00:18:14.797 "driver_specific": { 00:18:14.797 "nvme": [ 00:18:14.797 { 00:18:14.797 "trid": { 00:18:14.797 "trtype": "TCP", 00:18:14.797 "adrfam": "IPv4", 00:18:14.797 "traddr": "10.0.0.2", 00:18:14.797 "trsvcid": "4420", 00:18:14.797 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:14.797 }, 00:18:14.797 "ctrlr_data": { 00:18:14.797 "cntlid": 1, 00:18:14.797 "vendor_id": "0x8086", 00:18:14.797 "model_number": "SPDK bdev Controller", 00:18:14.797 "serial_number": "SPDK0", 
00:18:14.797 "firmware_revision": "24.09", 00:18:14.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:14.797 "oacs": { 00:18:14.797 "security": 0, 00:18:14.797 "format": 0, 00:18:14.797 "firmware": 0, 00:18:14.797 "ns_manage": 0 00:18:14.797 }, 00:18:14.797 "multi_ctrlr": true, 00:18:14.798 "ana_reporting": false 00:18:14.798 }, 00:18:14.798 "vs": { 00:18:14.798 "nvme_version": "1.3" 00:18:14.798 }, 00:18:14.798 "ns_data": { 00:18:14.798 "id": 1, 00:18:14.798 "can_share": true 00:18:14.798 } 00:18:14.798 } 00:18:14.798 ], 00:18:14.798 "mp_policy": "active_passive" 00:18:14.798 } 00:18:14.798 } 00:18:14.798 ] 00:18:14.798 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1114463 00:18:14.798 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:14.798 12:08:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.798 Running I/O for 10 seconds... 00:18:16.170 Latency(us) 00:18:16.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.170 Nvme0n1 : 1.00 23006.00 89.87 0.00 0.00 0.00 0.00 0.00 00:18:16.170 =================================================================================================================== 00:18:16.170 Total : 23006.00 89.87 0.00 0.00 0.00 0.00 0.00 00:18:16.170 00:18:16.736 12:08:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:16.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.994 Nvme0n1 : 2.00 23253.00 90.83 0.00 0.00 0.00 0.00 0.00 00:18:16.994 =================================================================================================================== 00:18:16.994 Total : 23253.00 90.83 0.00 0.00 0.00 0.00 0.00 00:18:16.994 00:18:16.994 true 00:18:16.994 12:08:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:16.994 12:08:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:17.251 12:08:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:17.251 12:08:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:17.251 12:08:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1114463 00:18:17.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.817 Nvme0n1 : 3.00 23274.00 90.91 0.00 0.00 0.00 0.00 0.00 00:18:17.817 =================================================================================================================== 00:18:17.817 Total : 23274.00 90.91 0.00 0.00 0.00 0.00 0.00 00:18:17.817 00:18:19.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.192 Nvme0n1 : 4.00 23349.25 91.21 0.00 0.00 0.00 0.00 0.00 00:18:19.192 =================================================================================================================== 00:18:19.192 Total : 23349.25 91.21 0.00 
0.00 0.00 0.00 0.00 00:18:19.192 00:18:20.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:20.127 Nvme0n1 : 5.00 23398.60 91.40 0.00 0.00 0.00 0.00 0.00 00:18:20.127 =================================================================================================================== 00:18:20.127 Total : 23398.60 91.40 0.00 0.00 0.00 0.00 0.00 00:18:20.127 00:18:21.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.063 Nvme0n1 : 6.00 23400.67 91.41 0.00 0.00 0.00 0.00 0.00 00:18:21.063 =================================================================================================================== 00:18:21.063 Total : 23400.67 91.41 0.00 0.00 0.00 0.00 0.00 00:18:21.063 00:18:22.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.000 Nvme0n1 : 7.00 23441.43 91.57 0.00 0.00 0.00 0.00 0.00 00:18:22.000 =================================================================================================================== 00:18:22.000 Total : 23441.43 91.57 0.00 0.00 0.00 0.00 0.00 00:18:22.000 00:18:22.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.936 Nvme0n1 : 8.00 23461.38 91.65 0.00 0.00 0.00 0.00 0.00 00:18:22.936 =================================================================================================================== 00:18:22.936 Total : 23461.38 91.65 0.00 0.00 0.00 0.00 0.00 00:18:22.936 00:18:23.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.871 Nvme0n1 : 9.00 23482.11 91.73 0.00 0.00 0.00 0.00 0.00 00:18:23.871 =================================================================================================================== 00:18:23.871 Total : 23482.11 91.73 0.00 0.00 0.00 0.00 0.00 00:18:23.871 00:18:24.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.805 Nvme0n1 : 10.00 23511.60 91.84 0.00 0.00 0.00 0.00 0.00 00:18:24.805 =================================================================================================================== 00:18:24.805 Total : 23511.60 91.84 0.00 0.00 0.00 0.00 0.00 00:18:24.805 00:18:24.805 00:18:24.805 Latency(us) 00:18:24.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.805 Nvme0n1 : 10.00 23515.30 91.86 0.00 0.00 5440.22 3177.07 14588.88 00:18:24.805 =================================================================================================================== 00:18:24.805 Total : 23515.30 91.86 0.00 0.00 5440.22 3177.07 14588.88 00:18:24.805 0 00:18:25.064 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1114230 00:18:25.064 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1114230 ']' 00:18:25.064 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1114230 00:18:25.064 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1114230 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.065 12:08:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1114230' 00:18:25.065 killing process with pid 1114230 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1114230 00:18:25.065 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.065 00:18:25.065 Latency(us) 00:18:25.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.065 =================================================================================================================== 00:18:25.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.065 12:08:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1114230 00:18:25.065 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.324 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:25.583 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:25.583 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1111355 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1111355 00:18:25.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1111355 Killed "${NVMF_APP[@]}" "$@" 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1116304 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1116304 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1116304 ']' 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.843 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:25.843 [2024-07-15 12:08:15.686157] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:18:25.843 [2024-07-15 12:08:15.686204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.843 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.843 [2024-07-15 12:08:15.756255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.843 [2024-07-15 12:08:15.795791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.843 [2024-07-15 12:08:15.795831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.843 [2024-07-15 12:08:15.795838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.843 [2024-07-15 12:08:15.795844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.843 [2024-07-15 12:08:15.795849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
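For the dirty-recovery phase the target is restarted with the full tracepoint mask (-e 0xFFFF), which is what the app_setup_trace notices above refer to. A rough sketch of enabling and capturing those events (binary paths assumed under build/bin; the harness additionally runs the target inside a network namespace, omitted here):

  # Start the target with all tracepoint groups enabled, shared-memory instance id 0
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # Snapshot nvmf events at runtime, or keep /dev/shm/nvmf_trace.0 for offline analysis
  ./build/bin/spdk_trace -s nvmf -i 0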
00:18:25.843 [2024-07-15 12:08:15.795871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.102 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.103 12:08:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:26.103 [2024-07-15 12:08:16.073369] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:26.103 [2024-07-15 12:08:16.073466] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:26.103 [2024-07-15 12:08:16.073490] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f4856f31-f123-4681-9a8b-669199258ca7 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f4856f31-f123-4681-9a8b-669199258ca7 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:26.103 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:26.362 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4856f31-f123-4681-9a8b-669199258ca7 -t 2000 00:18:26.622 [ 00:18:26.622 { 00:18:26.622 "name": "f4856f31-f123-4681-9a8b-669199258ca7", 00:18:26.622 "aliases": [ 00:18:26.622 "lvs/lvol" 00:18:26.622 ], 00:18:26.622 "product_name": "Logical Volume", 00:18:26.622 "block_size": 4096, 00:18:26.622 "num_blocks": 38912, 00:18:26.622 "uuid": "f4856f31-f123-4681-9a8b-669199258ca7", 00:18:26.622 "assigned_rate_limits": { 00:18:26.622 "rw_ios_per_sec": 0, 00:18:26.622 "rw_mbytes_per_sec": 0, 00:18:26.622 "r_mbytes_per_sec": 0, 00:18:26.622 "w_mbytes_per_sec": 0 00:18:26.622 }, 00:18:26.622 "claimed": false, 00:18:26.622 "zoned": false, 00:18:26.622 "supported_io_types": { 00:18:26.622 "read": true, 00:18:26.622 "write": true, 00:18:26.622 "unmap": true, 00:18:26.622 "flush": false, 00:18:26.622 "reset": true, 00:18:26.622 "nvme_admin": false, 00:18:26.622 "nvme_io": false, 00:18:26.622 "nvme_io_md": 
false, 00:18:26.622 "write_zeroes": true, 00:18:26.622 "zcopy": false, 00:18:26.622 "get_zone_info": false, 00:18:26.622 "zone_management": false, 00:18:26.622 "zone_append": false, 00:18:26.622 "compare": false, 00:18:26.622 "compare_and_write": false, 00:18:26.622 "abort": false, 00:18:26.622 "seek_hole": true, 00:18:26.622 "seek_data": true, 00:18:26.622 "copy": false, 00:18:26.622 "nvme_iov_md": false 00:18:26.622 }, 00:18:26.622 "driver_specific": { 00:18:26.622 "lvol": { 00:18:26.622 "lvol_store_uuid": "8ba9c607-e1f4-4d76-b07c-07ce88499209", 00:18:26.622 "base_bdev": "aio_bdev", 00:18:26.622 "thin_provision": false, 00:18:26.622 "num_allocated_clusters": 38, 00:18:26.622 "snapshot": false, 00:18:26.622 "clone": false, 00:18:26.622 "esnap_clone": false 00:18:26.622 } 00:18:26.622 } 00:18:26.622 } 00:18:26.622 ] 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:26.622 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:26.881 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:26.881 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:27.141 [2024-07-15 12:08:16.938065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:27.141 12:08:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:27.400 request: 00:18:27.400 { 00:18:27.400 "uuid": "8ba9c607-e1f4-4d76-b07c-07ce88499209", 00:18:27.400 "method": "bdev_lvol_get_lvstores", 00:18:27.400 "req_id": 1 00:18:27.400 } 00:18:27.400 Got JSON-RPC error response 00:18:27.400 response: 00:18:27.400 { 00:18:27.400 "code": -19, 00:18:27.400 "message": "No such device" 00:18:27.400 } 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:27.400 aio_bdev 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f4856f31-f123-4681-9a8b-669199258ca7 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f4856f31-f123-4681-9a8b-669199258ca7 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:27.400 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:27.660 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f4856f31-f123-4681-9a8b-669199258ca7 -t 2000 00:18:27.660 [ 00:18:27.660 { 00:18:27.660 "name": "f4856f31-f123-4681-9a8b-669199258ca7", 00:18:27.660 "aliases": [ 00:18:27.660 "lvs/lvol" 00:18:27.660 ], 00:18:27.660 "product_name": "Logical Volume", 00:18:27.660 "block_size": 4096, 00:18:27.660 "num_blocks": 38912, 00:18:27.660 "uuid": "f4856f31-f123-4681-9a8b-669199258ca7", 00:18:27.660 "assigned_rate_limits": { 00:18:27.660 "rw_ios_per_sec": 0, 00:18:27.660 "rw_mbytes_per_sec": 0, 00:18:27.660 "r_mbytes_per_sec": 0, 00:18:27.660 "w_mbytes_per_sec": 0 00:18:27.660 }, 00:18:27.660 "claimed": false, 00:18:27.660 "zoned": false, 00:18:27.660 "supported_io_types": { 
00:18:27.660 "read": true, 00:18:27.660 "write": true, 00:18:27.660 "unmap": true, 00:18:27.660 "flush": false, 00:18:27.660 "reset": true, 00:18:27.660 "nvme_admin": false, 00:18:27.660 "nvme_io": false, 00:18:27.660 "nvme_io_md": false, 00:18:27.660 "write_zeroes": true, 00:18:27.660 "zcopy": false, 00:18:27.660 "get_zone_info": false, 00:18:27.660 "zone_management": false, 00:18:27.660 "zone_append": false, 00:18:27.660 "compare": false, 00:18:27.660 "compare_and_write": false, 00:18:27.660 "abort": false, 00:18:27.660 "seek_hole": true, 00:18:27.660 "seek_data": true, 00:18:27.660 "copy": false, 00:18:27.660 "nvme_iov_md": false 00:18:27.660 }, 00:18:27.660 "driver_specific": { 00:18:27.660 "lvol": { 00:18:27.660 "lvol_store_uuid": "8ba9c607-e1f4-4d76-b07c-07ce88499209", 00:18:27.660 "base_bdev": "aio_bdev", 00:18:27.660 "thin_provision": false, 00:18:27.660 "num_allocated_clusters": 38, 00:18:27.660 "snapshot": false, 00:18:27.660 "clone": false, 00:18:27.660 "esnap_clone": false 00:18:27.660 } 00:18:27.660 } 00:18:27.660 } 00:18:27.660 ] 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:27.919 12:08:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:28.178 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:28.178 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f4856f31-f123-4681-9a8b-669199258ca7 00:18:28.437 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ba9c607-e1f4-4d76-b07c-07ce88499209 00:18:28.437 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.698 00:18:28.698 real 0m17.040s 00:18:28.698 user 0m43.995s 00:18:28.698 sys 0m3.822s 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:28.698 ************************************ 00:18:28.698 END TEST lvs_grow_dirty 00:18:28.698 ************************************ 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
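Winding the dirty case down follows the same teardown order as the clean case: drop the lvol, then the lvstore, then the AIO bdev, and finally the backing file. As plain RPCs (UUIDs from this run, backing-file path elided):

  ./scripts/rpc.py bdev_lvol_delete f4856f31-f123-4681-9a8b-669199258ca7
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u 8ba9c607-e1f4-4d76-b07c-07ce88499209
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /path/to/aio_bdev   # backing file created by the harness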
00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:28.698 nvmf_trace.0 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.698 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.698 rmmod nvme_tcp 00:18:28.698 rmmod nvme_fabrics 00:18:28.698 rmmod nvme_keyring 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1116304 ']' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1116304 ']' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1116304' 00:18:28.957 killing process with pid 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1116304 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:28.957 
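The harness finishes by archiving the target's trace shared-memory file and unloading the kernel initiator modules it loaded earlier. Roughly, with the output directory left as an assumed variable:

  # Save the tracepoint shm file produced by 'nvmf_tgt -i 0' for offline analysis
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

  # Unload the host-side NVMe/TCP stack
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics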
12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.957 12:08:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.522 12:08:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.522 00:18:31.522 real 0m41.236s 00:18:31.522 user 1m3.947s 00:18:31.522 sys 0m10.028s 00:18:31.522 12:08:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.522 12:08:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:31.522 ************************************ 00:18:31.522 END TEST nvmf_lvs_grow 00:18:31.522 ************************************ 00:18:31.522 12:08:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.522 12:08:21 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:31.522 12:08:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.522 12:08:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.522 12:08:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.522 ************************************ 00:18:31.522 START TEST nvmf_bdev_io_wait 00:18:31.522 ************************************ 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:31.522 * Looking for test storage... 
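
For reference, the nvmftestfini teardown captured above boils down to the short sequence below. Every command except the netns removal appears verbatim in the trace; the "ip netns delete" line is an assumption about what the suppressed _remove_spdk_ns helper does (its output is redirected to /dev/null), and the pid is the one from this run.

tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0   # archive the target's trace buffer
sync
modprobe -v -r nvme-tcp          # unload the kernel initiator; the rmmod lines above show nvme_fabrics and nvme_keyring coming out with it
modprobe -v -r nvme-fabrics
kill 1116304 && wait 1116304     # stop the nvmf_tgt started for nvmf_lvs_grow (wait works because it is a child of the test shell)
ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1         # drop the initiator-side test address
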
00:18:31.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.522 12:08:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:36.815 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:36.815 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:36.815 Found net devices under 0000:86:00.0: cvl_0_0 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:36.815 Found net devices under 0000:86:00.1: cvl_0_1 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.815 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.816 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:37.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:18:37.075 00:18:37.075 --- 10.0.0.2 ping statistics --- 00:18:37.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.075 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:18:37.075 00:18:37.075 --- 10.0.0.1 ping statistics --- 00:18:37.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.075 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1120337 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1120337 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1120337 ']' 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.075 12:08:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:37.075 [2024-07-15 12:08:27.047998] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:18:37.075 [2024-07-15 12:08:27.048041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.075 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.334 [2024-07-15 12:08:27.116511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.334 [2024-07-15 12:08:27.159612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.334 [2024-07-15 12:08:27.159649] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.334 [2024-07-15 12:08:27.159656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.334 [2024-07-15 12:08:27.159662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.334 [2024-07-15 12:08:27.159667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.334 [2024-07-15 12:08:27.159725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.334 [2024-07-15 12:08:27.159835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.334 [2024-07-15 12:08:27.159940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.334 [2024-07-15 12:08:27.159942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.902 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 [2024-07-15 12:08:27.974461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
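
The target bring-up traced above reduces to the following sketch. The nvmf_tgt path is shortened, rpc_cmd stands for SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, and the comment on the bdev_set_options flags is an interpretation (tiny bdev_io pool/cache so the test can exhaust them and exercise the io_wait path) rather than something stated in the trace.

# Start the target inside the test namespace, paused until framework_start_init (--wait-for-rpc)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

./scripts/rpc.py bdev_set_options -p 5 -c 1                # shrink the bdev_io pool and cache (assumed meaning of -p/-c)
./scripts/rpc.py framework_start_init                      # resume the subsystem init held back by --wait-for-rpc
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags copied as-is from the trace

The Malloc0 bdev, the cnode1 subsystem with its namespace, and the 10.0.0.2:4420 TCP listener are added by the rpc_cmd calls that follow below.
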
00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 Malloc0 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:38.161 [2024-07-15 12:08:28.030563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1120586 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1120588 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.161 { 00:18:38.161 "params": { 00:18:38.161 "name": "Nvme$subsystem", 00:18:38.161 "trtype": "$TEST_TRANSPORT", 00:18:38.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.161 "adrfam": "ipv4", 00:18:38.161 "trsvcid": "$NVMF_PORT", 00:18:38.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.161 "hdgst": ${hdgst:-false}, 00:18:38.161 "ddgst": ${ddgst:-false} 00:18:38.161 }, 00:18:38.161 "method": "bdev_nvme_attach_controller" 00:18:38.161 } 00:18:38.161 EOF 00:18:38.161 )") 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1120590 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.161 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.161 { 00:18:38.161 "params": { 00:18:38.161 "name": "Nvme$subsystem", 00:18:38.161 "trtype": "$TEST_TRANSPORT", 00:18:38.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.161 "adrfam": "ipv4", 00:18:38.161 "trsvcid": "$NVMF_PORT", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.162 "hdgst": ${hdgst:-false}, 00:18:38.162 "ddgst": ${ddgst:-false} 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 } 00:18:38.162 EOF 00:18:38.162 )") 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1120593 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.162 { 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme$subsystem", 00:18:38.162 "trtype": "$TEST_TRANSPORT", 00:18:38.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "$NVMF_PORT", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.162 "hdgst": ${hdgst:-false}, 00:18:38.162 "ddgst": ${ddgst:-false} 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 } 00:18:38.162 EOF 00:18:38.162 )") 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.162 12:08:28 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.162 { 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme$subsystem", 00:18:38.162 "trtype": "$TEST_TRANSPORT", 00:18:38.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "$NVMF_PORT", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.162 "hdgst": ${hdgst:-false}, 00:18:38.162 "ddgst": ${ddgst:-false} 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 } 00:18:38.162 EOF 00:18:38.162 )") 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1120586 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme1", 00:18:38.162 "trtype": "tcp", 00:18:38.162 "traddr": "10.0.0.2", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "4420", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.162 "hdgst": false, 00:18:38.162 "ddgst": false 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 }' 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
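
The four bdevperf jobs above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) all consume the same generated config. A standalone reconstruction of the 0x10 write job is sketched below: the params block is exactly what gen_nvmf_target_json printed in the trace, while the surrounding "subsystems"/"bdev"/"config" wrapper is assumed to follow the standard SPDK JSON-config layout, /tmp/nvme1.json is a made-up file name, and the binary path is shortened.

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256

Here -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w write the workload, -t 1 the run time in seconds, and -s 256 the per-instance memory allotment in MB.
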
00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme1", 00:18:38.162 "trtype": "tcp", 00:18:38.162 "traddr": "10.0.0.2", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "4420", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.162 "hdgst": false, 00:18:38.162 "ddgst": false 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 }' 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme1", 00:18:38.162 "trtype": "tcp", 00:18:38.162 "traddr": "10.0.0.2", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "4420", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.162 "hdgst": false, 00:18:38.162 "ddgst": false 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 }' 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:38.162 12:08:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.162 "params": { 00:18:38.162 "name": "Nvme1", 00:18:38.162 "trtype": "tcp", 00:18:38.162 "traddr": "10.0.0.2", 00:18:38.162 "adrfam": "ipv4", 00:18:38.162 "trsvcid": "4420", 00:18:38.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.162 "hdgst": false, 00:18:38.162 "ddgst": false 00:18:38.162 }, 00:18:38.162 "method": "bdev_nvme_attach_controller" 00:18:38.162 }' 00:18:38.162 [2024-07-15 12:08:28.080409] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:18:38.162 [2024-07-15 12:08:28.080462] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:38.162 [2024-07-15 12:08:28.082990] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:18:38.162 [2024-07-15 12:08:28.083040] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:38.162 [2024-07-15 12:08:28.083449] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:18:38.162 [2024-07-15 12:08:28.083494] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:38.162 [2024-07-15 12:08:28.084174] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:18:38.162 [2024-07-15 12:08:28.084215] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:38.162 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.420 [2024-07-15 12:08:28.228705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.420 [2024-07-15 12:08:28.253301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.420 [2024-07-15 12:08:28.319223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.420 [2024-07-15 12:08:28.346341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:38.420 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.420 [2024-07-15 12:08:28.411832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.679 [2024-07-15 12:08:28.444659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:38.679 [2024-07-15 12:08:28.470050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.679 [2024-07-15 12:08:28.497390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:38.679 Running I/O for 1 seconds... 00:18:38.937 Running I/O for 1 seconds... 00:18:38.937 Running I/O for 1 seconds... 00:18:38.937 Running I/O for 1 seconds... 00:18:39.871 00:18:39.871 Latency(us) 00:18:39.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.871 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:39.872 Nvme1n1 : 1.01 12466.51 48.70 0.00 0.00 10228.11 6468.12 17324.30 00:18:39.872 =================================================================================================================== 00:18:39.872 Total : 12466.51 48.70 0.00 0.00 10228.11 6468.12 17324.30 00:18:39.872 00:18:39.872 Latency(us) 00:18:39.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.872 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:39.872 Nvme1n1 : 1.01 9980.69 38.99 0.00 0.00 12781.10 6183.18 22567.18 00:18:39.872 =================================================================================================================== 00:18:39.872 Total : 9980.69 38.99 0.00 0.00 12781.10 6183.18 22567.18 00:18:39.872 00:18:39.872 Latency(us) 00:18:39.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.872 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:39.872 Nvme1n1 : 1.00 11539.50 45.08 0.00 0.00 11064.77 4445.05 22453.20 00:18:39.872 =================================================================================================================== 00:18:39.872 Total : 11539.50 45.08 0.00 0.00 11064.77 4445.05 22453.20 00:18:39.872 00:18:39.872 Latency(us) 00:18:39.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.872 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:39.872 Nvme1n1 : 1.00 245989.58 960.90 0.00 0.00 517.80 216.38 690.98 00:18:39.872 =================================================================================================================== 00:18:39.872 Total : 245989.58 960.90 0.00 0.00 517.80 216.38 690.98 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1120588 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1120590 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1120593 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.130 rmmod nvme_tcp 00:18:40.130 rmmod nvme_fabrics 00:18:40.130 rmmod nvme_keyring 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1120337 ']' 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1120337 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1120337 ']' 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1120337 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.130 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120337 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120337' 00:18:40.388 killing process with pid 1120337 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1120337 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1120337 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.388 12:08:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.938 12:08:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.938 00:18:42.938 real 0m11.314s 00:18:42.938 user 0m19.534s 00:18:42.938 sys 0m6.095s 00:18:42.938 12:08:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.938 12:08:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:42.938 ************************************ 00:18:42.938 END TEST nvmf_bdev_io_wait 00:18:42.938 ************************************ 00:18:42.938 12:08:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:42.939 12:08:32 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:42.939 12:08:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:42.939 12:08:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.939 12:08:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.939 ************************************ 00:18:42.939 START TEST nvmf_queue_depth 00:18:42.939 ************************************ 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:42.939 * Looking for test storage... 
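
A quick sanity check on the result table above: bandwidth is just IOPS times I/O size, e.g. for the read job 12466.51 x 4096 B ≈ 48.70 MiB/s, matching the MiB/s column (the write and unmap rows check out the same way). The flush row's ~246K IOPS is plausible because flushes against a Malloc (RAM-backed) bdev complete without touching any media. The same arithmetic as a one-liner:

awk 'BEGIN { printf "%.2f MiB/s\n", 12466.51 * 4096 / (1024 * 1024) }'   # -> 48.70 MiB/s
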
00:18:42.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.939 12:08:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.205 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.205 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.205 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.205 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.205 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.206 
12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:48.206 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:48.206 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:48.206 Found net devices under 0000:86:00.0: cvl_0_0 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:48.206 Found net devices under 0000:86:00.1: cvl_0_1 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.206 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:48.465 00:18:48.465 --- 10.0.0.2 ping statistics --- 00:18:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.465 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:18:48.465 00:18:48.465 --- 10.0.0.1 ping statistics --- 00:18:48.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.465 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1124369 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1124369 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1124369 ']' 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.465 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.465 [2024-07-15 12:08:38.438620] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
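Here nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with core mask 0x2 (a single reactor, which is why the log below reports it starting on core 1), and waitforlisten then blocks until the target's RPC socket answers. A minimal sketch of that waiting idea, polling a cheap RPC until it succeeds (an illustration of the intent, not the actual waitforlisten helper):

    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done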
00:18:48.465 [2024-07-15 12:08:38.438668] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.465 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.724 [2024-07-15 12:08:38.511577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.724 [2024-07-15 12:08:38.551526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.724 [2024-07-15 12:08:38.551564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.724 [2024-07-15 12:08:38.551571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.724 [2024-07-15 12:08:38.551578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.724 [2024-07-15 12:08:38.551583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.724 [2024-07-15 12:08:38.551602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.724 [2024-07-15 12:08:38.675876] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.724 Malloc0 00:18:48.724 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.983 
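The target-side configuration for the queue-depth test is the four RPCs traced above, issued against the target's default /var/tmp/spdk.sock: create the TCP transport, create a 64 MiB Malloc bdev with 512-byte blocks, create subsystem cnode1, and attach the bdev as its namespace (the listener on 10.0.0.2:4420 is added in the next step). Condensed into standalone rpc.py calls, with paths relative to the spdk checkout:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0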
12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 [2024-07-15 12:08:38.748624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1124398 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1124398 /var/tmp/bdevperf.sock 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1124398 ']' 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:48.983 [2024-07-15 12:08:38.797683] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
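The initiator side follows: bdevperf is started in -z mode with its own RPC socket, queue depth 1024, 4096-byte I/O, a verify workload, and a 10-second run; the controller attach and the perform_tests kick are traced next. Condensed, with every argument exactly as echoed in this run:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests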
00:18:48.983 [2024-07-15 12:08:38.797722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124398 ] 00:18:48.983 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.983 [2024-07-15 12:08:38.850220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.983 [2024-07-15 12:08:38.890321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.983 12:08:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:49.242 NVMe0n1 00:18:49.242 12:08:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.242 12:08:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.242 Running I/O for 10 seconds... 00:18:59.288 00:18:59.288 Latency(us) 00:18:59.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:59.288 Verification LBA range: start 0x0 length 0x4000 00:18:59.288 NVMe0n1 : 10.06 12293.31 48.02 0.00 0.00 83037.95 18692.01 54252.41 00:18:59.288 =================================================================================================================== 00:18:59.288 Total : 12293.31 48.02 0.00 0.00 83037.95 18692.01 54252.41 00:18:59.288 0 00:18:59.288 12:08:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1124398 00:18:59.288 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1124398 ']' 00:18:59.288 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1124398 00:18:59.289 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124398 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124398' 00:18:59.547 killing process with pid 1124398 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1124398 00:18:59.547 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.547 00:18:59.547 Latency(us) 00:18:59.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.547 
=================================================================================================================== 00:18:59.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1124398 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:59.547 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:59.548 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:59.548 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:59.548 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:59.548 rmmod nvme_tcp 00:18:59.548 rmmod nvme_fabrics 00:18:59.548 rmmod nvme_keyring 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1124369 ']' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1124369 ']' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1124369' 00:18:59.806 killing process with pid 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1124369 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.806 12:08:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.341 12:08:51 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:02.341 00:19:02.341 real 0m19.410s 00:19:02.341 user 0m22.877s 00:19:02.341 sys 0m5.819s 00:19:02.341 12:08:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:02.341 12:08:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:02.341 ************************************ 00:19:02.341 END TEST nvmf_queue_depth 00:19:02.341 ************************************ 00:19:02.341 12:08:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:02.341 12:08:51 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:02.341 12:08:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:02.341 12:08:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.341 12:08:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.341 ************************************ 00:19:02.341 START TEST nvmf_target_multipath 00:19:02.341 ************************************ 00:19:02.341 12:08:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:02.341 * Looking for test storage... 00:19:02.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.341 12:08:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.342 12:08:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.614 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:07.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:07.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:07.615 Found net devices under 0000:86:00.0: cvl_0_0 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:07.615 Found net devices under 0000:86:00.1: cvl_0_1 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.615 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:19:07.874 00:19:07.874 --- 10.0.0.2 ping statistics --- 00:19:07.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.874 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:19:07.874 00:19:07.874 --- 10.0.0.1 ping statistics --- 00:19:07.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.874 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:07.874 only one NIC for nvmf test 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.874 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.133 rmmod nvme_tcp 00:19:08.133 rmmod nvme_fabrics 00:19:08.133 rmmod nvme_keyring 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.133 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.134 12:08:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.039 12:08:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:10.039 00:19:10.039 real 0m8.091s 00:19:10.039 user 0m1.634s 00:19:10.039 sys 0m4.441s 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:10.039 12:09:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:10.039 ************************************ 00:19:10.039 END TEST nvmf_target_multipath 00:19:10.039 ************************************ 00:19:10.298 12:09:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:10.298 12:09:00 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:10.298 12:09:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:10.298 12:09:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.298 12:09:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:10.298 ************************************ 00:19:10.298 START TEST nvmf_zcopy 00:19:10.298 ************************************ 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:10.298 * Looking for test storage... 
00:19:10.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
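The very long PATH strings above and below come from /etc/opt/spdk-pkgdep/paths/export.sh, which these tests source and which prepends the go, golangci and protoc directories without checking whether they are already present, so the same triplet accumulates once per sourcing. Harmless, but if it mattered, a duplicate-safe prepend is the usual shell idiom (a sketch of that idiom, not what export.sh actually does):

    prepend_path() {
        case ":${PATH}:" in
            *":$1:"*) ;;                # already on PATH, skip
            *) PATH="$1:${PATH}" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin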
00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:10.298 12:09:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:10.299 12:09:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:16.863 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:16.864 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.864 
12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:16.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:16.864 Found net devices under 0000:86:00.0: cvl_0_0 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:16.864 Found net devices under 0000:86:00.1: cvl_0_1 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:16.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:19:16.864 00:19:16.864 --- 10.0.0.2 ping statistics --- 00:19:16.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.864 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:16.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:19:16.864 00:19:16.864 --- 10.0.0.1 ping statistics --- 00:19:16.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.864 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1133153 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1133153 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1133153 ']' 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.864 12:09:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.864 [2024-07-15 12:09:06.021293] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:19:16.864 [2024-07-15 12:09:06.021339] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.864 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.864 [2024-07-15 12:09:06.093782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.864 [2024-07-15 12:09:06.133420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.864 [2024-07-15 12:09:06.133461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
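The trace above is nvmf/common.sh wiring the two ports of the NIC into a single-host loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace with 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and both directions are ping-tested before nvmf_tgt is launched inside the namespace. A minimal stand-alone sketch of the same wiring, run as root, with TGT_IF/INI_IF standing in for whatever names the driver assigns (cvl_0_0/cvl_0_1 here):

# sketch only: reproduces the namespace wiring shown in the trace above
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # port that will carry the NVMe-oF target address
INI_IF=cvl_0_1        # port left in the root namespace for the initiator

ip -4 addr flush dev "$TGT_IF"
ip -4 addr flush dev "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic in on the initiator-side port, then verify reachability both ways
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1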
00:19:16.864 [2024-07-15 12:09:06.133468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.864 [2024-07-15 12:09:06.133475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.864 [2024-07-15 12:09:06.133481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.864 [2024-07-15 12:09:06.133506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.864 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:16.864 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:19:16.864 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:16.864 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:16.864 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 [2024-07-15 12:09:06.270462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 [2024-07-15 12:09:06.290635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 malloc0 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 
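At this point nvmf_tgt is running inside the namespace and zcopy.sh configures it over the RPC socket: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; a stand-alone sketch of the same calls follows, with the -o and -c 0 transport options copied verbatim from the trace:

# assumes nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock
RPC="./scripts/rpc.py"

# TCP transport with zero-copy enabled (-o and -c 0 as passed by the test)
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem that allows any host (-a), with a serial number and up to 10 namespaces
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# data listener plus a discovery listener on the namespaced target address
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MB RAM-backed bdev with 4096-byte blocks; the trace attaches it as namespace 1
# in the very next step (target/zcopy.sh@30)
$RPC bdev_malloc_create 32 4096 -b malloc0

Using a RAM-backed malloc bdev keeps the backing store in target memory, so the run exercises the TCP zero-copy data path rather than disk latency.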
12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:16.865 { 00:19:16.865 "params": { 00:19:16.865 "name": "Nvme$subsystem", 00:19:16.865 "trtype": "$TEST_TRANSPORT", 00:19:16.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:16.865 "adrfam": "ipv4", 00:19:16.865 "trsvcid": "$NVMF_PORT", 00:19:16.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:16.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:16.865 "hdgst": ${hdgst:-false}, 00:19:16.865 "ddgst": ${ddgst:-false} 00:19:16.865 }, 00:19:16.865 "method": "bdev_nvme_attach_controller" 00:19:16.865 } 00:19:16.865 EOF 00:19:16.865 )") 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:16.865 12:09:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:16.865 "params": { 00:19:16.865 "name": "Nvme1", 00:19:16.865 "trtype": "tcp", 00:19:16.865 "traddr": "10.0.0.2", 00:19:16.865 "adrfam": "ipv4", 00:19:16.865 "trsvcid": "4420", 00:19:16.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.865 "hdgst": false, 00:19:16.865 "ddgst": false 00:19:16.865 }, 00:19:16.865 "method": "bdev_nvme_attach_controller" 00:19:16.865 }' 00:19:16.865 [2024-07-15 12:09:06.373206] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:19:16.865 [2024-07-15 12:09:06.373265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133338 ] 00:19:16.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.865 [2024-07-15 12:09:06.441951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.865 [2024-07-15 12:09:06.482490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.865 Running I/O for 10 seconds... 
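With malloc0 exported as namespace 1 of cnode1, the test points bdevperf at the target. gen_nvmf_target_json emits the bdev_nvme_attach_controller parameters printed above and feeds them to bdevperf through a process-substitution descriptor (--json /dev/fd/62). A stand-alone sketch of the same first run, writing the config to a file instead (the params block is copied from the trace; the surrounding "subsystems"/"bdev" wrapper is the usual SPDK --json layout and is an assumption here, as is the /tmp file name; run from the SPDK repo root):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 10-second verify workload, queue depth 128, 8 KiB I/O, matching the trace
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192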
00:19:26.843 00:19:26.843 Latency(us) 00:19:26.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:26.843 Verification LBA range: start 0x0 length 0x1000 00:19:26.844 Nvme1n1 : 10.01 8641.01 67.51 0.00 0.00 14770.69 2037.31 25302.59 00:19:26.844 =================================================================================================================== 00:19:26.844 Total : 8641.01 67.51 0.00 0.00 14770.69 2037.31 25302.59 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1135392 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:27.102 { 00:19:27.102 "params": { 00:19:27.102 "name": "Nvme$subsystem", 00:19:27.102 "trtype": "$TEST_TRANSPORT", 00:19:27.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.102 "adrfam": "ipv4", 00:19:27.102 "trsvcid": "$NVMF_PORT", 00:19:27.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.102 "hdgst": ${hdgst:-false}, 00:19:27.102 "ddgst": ${ddgst:-false} 00:19:27.102 }, 00:19:27.102 "method": "bdev_nvme_attach_controller" 00:19:27.102 } 00:19:27.102 EOF 00:19:27.102 )") 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:27.102 [2024-07-15 12:09:17.014151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.014184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:27.102 12:09:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:27.102 "params": { 00:19:27.102 "name": "Nvme1", 00:19:27.102 "trtype": "tcp", 00:19:27.102 "traddr": "10.0.0.2", 00:19:27.102 "adrfam": "ipv4", 00:19:27.102 "trsvcid": "4420", 00:19:27.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.102 "hdgst": false, 00:19:27.102 "ddgst": false 00:19:27.102 }, 00:19:27.102 "method": "bdev_nvme_attach_controller" 00:19:27.102 }' 00:19:27.102 [2024-07-15 12:09:17.026142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.026157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 [2024-07-15 12:09:17.038171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.038181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 [2024-07-15 12:09:17.050206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.050216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 [2024-07-15 12:09:17.052051] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:19:27.102 [2024-07-15 12:09:17.052096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135392 ] 00:19:27.102 [2024-07-15 12:09:17.062251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.062266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 [2024-07-15 12:09:17.074271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.074281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.102 [2024-07-15 12:09:17.086302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.086314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.102 [2024-07-15 12:09:17.098334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.102 [2024-07-15 12:09:17.098344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.110366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.110376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.119675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.361 [2024-07-15 12:09:17.122397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.122407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.134429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.134442] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.146464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.146485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.158493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.158504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.160248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.361 [2024-07-15 12:09:17.170532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.170549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.182567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.182584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.194595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.194607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.206621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.206632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.218658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.218670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.230688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.230699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.242744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.242765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.254762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.254784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.266794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.266808] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.278831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.278847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.290858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.290869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.302893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.302905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.314921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.314932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.326954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.326969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 [2024-07-15 12:09:17.338995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.339012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.361 Running I/O for 5 seconds... 00:19:27.361 [2024-07-15 12:09:17.351025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.361 [2024-07-15 12:09:17.351038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.363218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.363247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.372041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.372063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.386204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.386234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.395002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.395024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.403988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.404009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.413373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.413394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.422661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.422681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.437314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.437333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.446333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.446352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.460627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.460647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.473941] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.473960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.482770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.482789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.497571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.497590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.511423] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.511443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.520303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.520322] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.528972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.528991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.543173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.543192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.620 [2024-07-15 12:09:17.556677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.620 [2024-07-15 12:09:17.556695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.621 [2024-07-15 12:09:17.565615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.621 [2024-07-15 12:09:17.565634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.621 [2024-07-15 12:09:17.580030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.621 [2024-07-15 12:09:17.580051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.621 [2024-07-15 12:09:17.590896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.621 [2024-07-15 12:09:17.590917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.621 [2024-07-15 12:09:17.599764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.621 [2024-07-15 12:09:17.599783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.621 [2024-07-15 12:09:17.614352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.621 [2024-07-15 12:09:17.614372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.628311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.628331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.637295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.637315] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.645998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.646017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.655235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.655254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.669724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.669743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.683203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.683223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.697248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.697267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.711321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.711342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.720129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.720148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.734306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.734325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.743328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.743347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.751958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.751977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.761024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.761043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.769627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.769645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.783731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.783750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.797699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.797718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.811883] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.811902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.826037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.826058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.837123] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.837145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.851136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.851155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.859925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.859943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.868685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.868703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.889 [2024-07-15 12:09:17.878045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.889 [2024-07-15 12:09:17.878065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.886966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.886986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.901786] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.901805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.912716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.912735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.921555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.921573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.930383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.930401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.945047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.945067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.955692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.955712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.964250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.964269] 
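The long run of paired messages here (subsystem.c "Requested NSID 1 already in use" followed by nvmf_rpc.c nvmf_rpc_ns_paused "Unable to add namespace") belongs to the second phase of the test: while the 5-second randrw bdevperf job started at target/zcopy.sh@37 is running, the nvmf_subsystem_add_ns RPC keeps being issued for NSID 1, which is already occupied by malloc0, and each attempt pauses and then resumes the subsystem with zero-copy requests in flight. The repeated failure is what drives that pause/resume cycling. A minimal sketch of a loop that produces this pattern (the exact loop in zcopy.sh may differ; the config file is the one from the sketch further up):

# background 50/50 random read/write job, 8 KiB I/O, queue depth 128, 5 seconds
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# hammer the add-namespace RPC while I/O is running; every call is expected to
# fail with "Requested NSID 1 already in use" but still forces a pause/resume
while kill -0 "$perfpid" 2> /dev/null; do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done

wait "$perfpid"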
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.973301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.973319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.981849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.187 [2024-07-15 12:09:17.981868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.187 [2024-07-15 12:09:17.991070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:17.991089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.005571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.005591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.014421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.014439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.022999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.023019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.032298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.032316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.041462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.041480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.055972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.055991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.065003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.065022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.073713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.073732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.082831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.082851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.092264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.092283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.106615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.106634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.120350] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.120370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.134332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.134352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.143417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.143436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.152390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.152409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.188 [2024-07-15 12:09:18.167191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.188 [2024-07-15 12:09:18.167211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.177480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.177499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.186403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.186422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.195390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.195409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.204455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.204474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.219134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.219153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.232752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.232771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.246443] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.246463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.260281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.260301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.269316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.269334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.448 [2024-07-15 12:09:18.283759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.448 [2024-07-15 12:09:18.283779] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:28.448 [2024-07-15 12:09:18.292886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:28.448 [2024-07-15 12:09:18.292904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same subsystem.c:2054 / nvmf_rpc.c:1546 error pair repeats for every subsequent add-namespace attempt, a few milliseconds apart, from 00:19:28.448 (2024-07-15 12:09:18) through 00:19:32.078 (2024-07-15 12:09:21)]
00:19:32.078 [2024-07-15 12:09:21.916437]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.916461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.930901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.930920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.945105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.945124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.954004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.954022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.963211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.963235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.977455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.977474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.986265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.986283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:21.994831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:21.994850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.003922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.003941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.018147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.018166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.031511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.031529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.040377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.040401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.054905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.054924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.064105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.064124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.078 [2024-07-15 12:09:22.078504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.078 [2024-07-15 12:09:22.078522] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.336 [2024-07-15 12:09:22.092360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.336 [2024-07-15 12:09:22.092380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.336 [2024-07-15 12:09:22.101205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.336 [2024-07-15 12:09:22.101223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.336 [2024-07-15 12:09:22.110510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.336 [2024-07-15 12:09:22.110529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.336 [2024-07-15 12:09:22.119315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.336 [2024-07-15 12:09:22.119333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.336 [2024-07-15 12:09:22.134045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.336 [2024-07-15 12:09:22.134069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.144512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.144531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.153757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.153776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.162410] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.162429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.171638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.171656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.186863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.186882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.201583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.201601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.210485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.210503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.224957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.224976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.238838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.238856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.252574] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.252592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.266541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.266560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.275317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.275335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.289539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.289558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.303165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.303184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.312130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.312149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.337 [2024-07-15 12:09:22.329571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.337 [2024-07-15 12:09:22.329590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.338299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.338319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.347403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.347422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.356439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.356462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.369131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.369166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 00:19:32.597 Latency(us) 00:19:32.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.597 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:32.597 Nvme1n1 : 5.01 16779.03 131.09 0.00 0.00 7620.87 3348.03 18008.15 00:19:32.597 =================================================================================================================== 00:19:32.597 Total : 16779.03 131.09 0.00 0.00 7620.87 3348.03 18008.15 00:19:32.597 [2024-07-15 12:09:22.379080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.379095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.391117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.391134] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.403154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.403174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.415182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.415197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.427204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.427217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.439242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.439256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.451275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.451288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.463304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.463317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.475336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.475346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.487365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.487377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.499400] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.499412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.511429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.511439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.523463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.523474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.535494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.535505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 [2024-07-15 12:09:22.547525] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:32.597 [2024-07-15 12:09:22.547540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1135392) - No such process 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1135392 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.597 delay0 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.597 12:09:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:32.856 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.856 [2024-07-15 12:09:22.679635] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:39.414 Initializing NVMe Controllers 00:19:39.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:39.414 Initialization complete. Launching workers. 
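The abort statistics printed next come from SPDK's bundled abort example driving NSID 1 of cnode1, which zcopy.sh has just re-pointed at a delay bdev layered over malloc0; with all four latency knobs set to 1,000,000 us, each I/O stays in flight long enough to be a candidate for abort. Roughly the same sequence can be reproduced by hand against a running nvmf_tgt (a sketch only: the SPDK tree layout and the default /var/tmp/spdk.sock RPC socket are assumptions, not taken from this run):

    # swap the original namespace for a 1 s delay bdev, then drive it with the abort example
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The -r/-t/-w/-n arguments to bdev_delay_create set the delay bdev's average and p99 read/write latencies in microseconds; the abort example keeps 64 random read/write I/Os outstanding for 5 seconds and then reports how many completed, how many abort commands it submitted, and how many of those succeeded.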
00:19:39.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 321 00:19:39.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 574, failed to submit 67 00:19:39.414 success 422, unsuccess 152, failed 0 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.414 rmmod nvme_tcp 00:19:39.414 rmmod nvme_fabrics 00:19:39.414 rmmod nvme_keyring 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1133153 ']' 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1133153 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1133153 ']' 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1133153 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1133153 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1133153' 00:19:39.414 killing process with pid 1133153 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1133153 00:19:39.414 12:09:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1133153 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.414 12:09:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.321 12:09:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:41.321 00:19:41.321 real 0m31.009s 00:19:41.321 user 0m41.989s 00:19:41.321 sys 0m10.606s 00:19:41.321 12:09:31 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.321 12:09:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 ************************************ 00:19:41.321 END TEST nvmf_zcopy 00:19:41.321 ************************************ 00:19:41.321 12:09:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:41.321 12:09:31 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:41.321 12:09:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:41.321 12:09:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.321 12:09:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:41.321 ************************************ 00:19:41.321 START TEST nvmf_nmic 00:19:41.321 ************************************ 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:41.321 * Looking for test storage... 00:19:41.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.321 12:09:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.891 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.891 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.891 12:09:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:19:47.891 00:19:47.891 --- 10.0.0.2 ping statistics --- 00:19:47.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.891 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:19:47.891 00:19:47.891 --- 10.0.0.1 ping statistics --- 00:19:47.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.891 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1140750 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1140750 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1140750 ']' 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.891 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.891 [2024-07-15 12:09:37.127243] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:19:47.892 [2024-07-15 12:09:37.127293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.892 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.892 [2024-07-15 12:09:37.200466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.892 [2024-07-15 12:09:37.243771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.892 [2024-07-15 12:09:37.243811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:47.892 [2024-07-15 12:09:37.243819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.892 [2024-07-15 12:09:37.243825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.892 [2024-07-15 12:09:37.243830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.892 [2024-07-15 12:09:37.247245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.892 [2024-07-15 12:09:37.247272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.892 [2024-07-15 12:09:37.247390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.892 [2024-07-15 12:09:37.247389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.150 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.150 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 [2024-07-15 12:09:37.995348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.151 12:09:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 Malloc0 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 [2024-07-15 12:09:38.046849] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:48.151 test case1: single bdev can't be used in multiple subsystems 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 [2024-07-15 12:09:38.070764] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:48.151 [2024-07-15 12:09:38.070782] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:48.151 [2024-07-15 12:09:38.070794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:48.151 request: 00:19:48.151 { 00:19:48.151 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:48.151 "namespace": { 00:19:48.151 "bdev_name": "Malloc0", 00:19:48.151 "no_auto_visible": false 00:19:48.151 }, 00:19:48.151 "method": "nvmf_subsystem_add_ns", 00:19:48.151 "req_id": 1 00:19:48.151 } 00:19:48.151 Got JSON-RPC error response 00:19:48.151 response: 00:19:48.151 { 00:19:48.151 "code": -32602, 00:19:48.151 "message": "Invalid parameters" 00:19:48.151 } 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:48.151 Adding namespace failed - expected result. 
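test case1 above hinges on the exclusive_write claim taken when a bdev becomes a namespace: once Malloc0 belongs to nqn.2016-06.io.spdk:cnode1, adding it to cnode2 is rejected and nvmf_subsystem_add_ns surfaces JSON-RPC error -32602, exactly as logged. The same check can be reproduced outside the harness with rpc.py (a sketch only; the scripts/ path and a target already listening on the default /var/tmp/spdk.sock socket are assumptions, not taken from this run):

    # one bdev, two subsystems: the second add_ns must fail
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed

nmic.sh treats that failure as the expected result and moves on to test case2, which instead adds a second listener (port 4421) to the same subsystem so the host can connect to one subsystem over two paths.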
00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:48.151 test case2: host connect to nvmf target in multiple paths 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:48.151 [2024-07-15 12:09:38.082897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.151 12:09:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:49.529 12:09:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:50.468 12:09:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:50.468 12:09:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:50.468 12:09:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.468 12:09:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:50.468 12:09:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:53.002 12:09:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:53.002 [global] 00:19:53.002 thread=1 00:19:53.002 invalidate=1 00:19:53.002 rw=write 00:19:53.002 time_based=1 00:19:53.002 runtime=1 00:19:53.002 ioengine=libaio 00:19:53.002 direct=1 00:19:53.002 bs=4096 00:19:53.002 iodepth=1 00:19:53.002 norandommap=0 00:19:53.002 numjobs=1 00:19:53.002 00:19:53.002 verify_dump=1 00:19:53.002 verify_backlog=512 00:19:53.002 verify_state_save=0 00:19:53.002 do_verify=1 00:19:53.002 verify=crc32c-intel 00:19:53.002 [job0] 00:19:53.002 filename=/dev/nvme0n1 00:19:53.002 Could not set queue depth (nvme0n1) 00:19:53.002 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.002 fio-3.35 00:19:53.002 Starting 1 thread 00:19:53.939 00:19:53.939 job0: (groupid=0, jobs=1): err= 0: pid=1141829: Mon Jul 15 12:09:43 2024 00:19:53.939 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:19:53.939 slat (nsec): min=6590, max=28766, avg=7343.72, stdev=924.88 
00:19:53.939 clat (usec): min=222, max=685, avg=273.80, stdev=35.46 00:19:53.939 lat (usec): min=228, max=714, avg=281.14, stdev=35.62 00:19:53.939 clat percentiles (usec): 00:19:53.939 | 1.00th=[ 231], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 262], 00:19:53.939 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 269], 00:19:53.939 | 70.00th=[ 273], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 322], 00:19:53.939 | 99.00th=[ 441], 99.50th=[ 445], 99.90th=[ 457], 99.95th=[ 461], 00:19:53.939 | 99.99th=[ 685] 00:19:53.939 write: IOPS=2416, BW=9666KiB/s (9898kB/s)(9676KiB/1001msec); 0 zone resets 00:19:53.939 slat (usec): min=9, max=25453, avg=20.91, stdev=517.31 00:19:53.939 clat (usec): min=128, max=392, avg=150.43, stdev=10.12 00:19:53.939 lat (usec): min=139, max=25824, avg=171.34, stdev=521.89 00:19:53.939 clat percentiles (usec): 00:19:53.939 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:19:53.939 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 149], 60.00th=[ 151], 00:19:53.939 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 163], 00:19:53.939 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 265], 99.95th=[ 371], 00:19:53.939 | 99.99th=[ 392] 00:19:53.939 bw ( KiB/s): min= 8766, max= 8766, per=90.69%, avg=8766.00, stdev= 0.00, samples=1 00:19:53.939 iops : min= 2191, max= 2191, avg=2191.00, stdev= 0.00, samples=1 00:19:53.939 lat (usec) : 250=56.39%, 500=43.59%, 750=0.02% 00:19:53.939 cpu : usr=2.40%, sys=3.90%, ctx=4472, majf=0, minf=2 00:19:53.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.939 issued rwts: total=2048,2419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.939 00:19:53.939 Run status group 0 (all jobs): 00:19:53.939 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:19:53.939 WRITE: bw=9666KiB/s (9898kB/s), 9666KiB/s-9666KiB/s (9898kB/s-9898kB/s), io=9676KiB (9908kB), run=1001-1001msec 00:19:53.939 00:19:53.939 Disk stats (read/write): 00:19:53.939 nvme0n1: ios=1926/2048, merge=0/0, ticks=1496/300, in_queue=1796, util=98.60% 00:19:53.939 12:09:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:54.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.198 rmmod nvme_tcp 00:19:54.198 rmmod nvme_fabrics 00:19:54.198 rmmod nvme_keyring 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1140750 ']' 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1140750 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1140750 ']' 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1140750 00:19:54.198 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140750 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140750' 00:19:54.457 killing process with pid 1140750 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1140750 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1140750 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.457 12:09:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.992 12:09:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.992 00:19:56.992 real 0m15.337s 00:19:56.992 user 0m35.700s 00:19:56.992 sys 0m5.126s 00:19:56.992 12:09:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.992 12:09:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 ************************************ 00:19:56.992 END TEST nvmf_nmic 00:19:56.992 ************************************ 00:19:56.992 12:09:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:56.992 12:09:46 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:56.992 12:09:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:56.992 
12:09:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.992 12:09:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.992 ************************************ 00:19:56.992 START TEST nvmf_fio_target 00:19:56.992 ************************************ 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:56.992 * Looking for test storage... 00:19:56.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.992 12:09:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.993 12:09:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.290 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.291 12:09:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:02.291 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:02.291 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.291 12:09:52 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:02.291 Found net devices under 0000:86:00.0: cvl_0_0 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:02.291 Found net devices under 0000:86:00.1: cvl_0_1 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.291 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:20:02.551 00:20:02.551 --- 10.0.0.2 ping statistics --- 00:20:02.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.551 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:20:02.551 00:20:02.551 --- 10.0.0.1 ping statistics --- 00:20:02.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.551 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1145578 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1145578 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1145578 ']' 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.551 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.551 [2024-07-15 12:09:52.538080] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:20:02.551 [2024-07-15 12:09:52.538122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.810 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.810 [2024-07-15 12:09:52.605938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.810 [2024-07-15 12:09:52.647460] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.810 [2024-07-15 12:09:52.647499] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.810 [2024-07-15 12:09:52.647506] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.810 [2024-07-15 12:09:52.647513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.810 [2024-07-15 12:09:52.647518] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.810 [2024-07-15 12:09:52.647579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.810 [2024-07-15 12:09:52.647686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.810 [2024-07-15 12:09:52.647793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.810 [2024-07-15 12:09:52.647794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.810 12:09:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:03.069 [2024-07-15 12:09:52.932665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.069 12:09:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:03.328 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:03.328 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:03.586 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:03.586 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:03.586 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:20:03.586 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:03.846 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:03.846 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:04.105 12:09:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.365 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:04.365 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.365 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:04.365 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:04.625 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:04.625 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:04.884 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:05.143 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:05.143 12:09:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.143 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:05.143 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:05.402 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.661 [2024-07-15 12:09:55.418883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.661 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:05.661 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:05.920 12:09:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:20:07.297 12:09:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:20:09.229 12:09:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:09.229 [global] 00:20:09.229 thread=1 00:20:09.229 invalidate=1 00:20:09.229 rw=write 00:20:09.229 time_based=1 00:20:09.229 runtime=1 00:20:09.229 ioengine=libaio 00:20:09.229 direct=1 00:20:09.229 bs=4096 00:20:09.229 iodepth=1 00:20:09.229 norandommap=0 00:20:09.229 numjobs=1 00:20:09.229 00:20:09.229 verify_dump=1 00:20:09.229 verify_backlog=512 00:20:09.229 verify_state_save=0 00:20:09.229 do_verify=1 00:20:09.229 verify=crc32c-intel 00:20:09.229 [job0] 00:20:09.229 filename=/dev/nvme0n1 00:20:09.229 [job1] 00:20:09.229 filename=/dev/nvme0n2 00:20:09.229 [job2] 00:20:09.229 filename=/dev/nvme0n3 00:20:09.229 [job3] 00:20:09.229 filename=/dev/nvme0n4 00:20:09.229 Could not set queue depth (nvme0n1) 00:20:09.229 Could not set queue depth (nvme0n2) 00:20:09.229 Could not set queue depth (nvme0n3) 00:20:09.229 Could not set queue depth (nvme0n4) 00:20:09.488 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.488 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.488 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.488 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.488 fio-3.35 00:20:09.488 Starting 4 threads 00:20:10.915 00:20:10.915 job0: (groupid=0, jobs=1): err= 0: pid=1146917: Mon Jul 15 12:10:00 2024 00:20:10.915 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:20:10.915 slat (nsec): min=10101, max=25508, avg=20465.14, stdev=3503.93 00:20:10.915 clat (usec): min=40598, max=42894, avg=41176.82, stdev=533.60 00:20:10.915 lat (usec): min=40608, max=42908, avg=41197.28, stdev=532.70 00:20:10.915 clat percentiles (usec): 00:20:10.915 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:20:10.915 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:10.915 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:20:10.915 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:10.915 | 99.99th=[42730] 00:20:10.915 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:20:10.915 slat (nsec): min=10592, max=34016, avg=12210.92, stdev=1889.89 
00:20:10.915 clat (usec): min=148, max=341, avg=179.61, stdev=15.40 00:20:10.915 lat (usec): min=159, max=355, avg=191.82, stdev=15.89 00:20:10.915 clat percentiles (usec): 00:20:10.915 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:20:10.915 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:20:10.915 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:20:10.915 | 99.00th=[ 221], 99.50th=[ 265], 99.90th=[ 343], 99.95th=[ 343], 00:20:10.915 | 99.99th=[ 343] 00:20:10.915 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:20:10.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:10.915 lat (usec) : 250=95.13%, 500=0.75% 00:20:10.915 lat (msec) : 50=4.12% 00:20:10.915 cpu : usr=0.50%, sys=0.89%, ctx=535, majf=0, minf=1 00:20:10.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.916 job1: (groupid=0, jobs=1): err= 0: pid=1146918: Mon Jul 15 12:10:00 2024 00:20:10.916 read: IOPS=955, BW=3822KiB/s (3914kB/s)(3864KiB/1011msec) 00:20:10.916 slat (usec): min=3, max=198, avg= 8.02, stdev= 6.48 00:20:10.916 clat (usec): min=214, max=42079, avg=831.47, stdev=4718.99 00:20:10.916 lat (usec): min=222, max=42101, avg=839.49, stdev=4720.14 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:20:10.916 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 277], 00:20:10.916 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 392], 95.00th=[ 429], 00:20:10.916 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:20:10.916 | 99.99th=[42206] 00:20:10.916 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:20:10.916 slat (nsec): min=10165, max=37131, avg=11586.58, stdev=1924.50 00:20:10.916 clat (usec): min=137, max=277, avg=176.83, stdev=16.24 00:20:10.916 lat (usec): min=149, max=289, avg=188.42, stdev=16.56 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:20:10.916 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:20:10.916 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:20:10.916 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 237], 99.95th=[ 277], 00:20:10.916 | 99.99th=[ 277] 00:20:10.916 bw ( KiB/s): min= 8192, max= 8192, per=45.82%, avg=8192.00, stdev= 0.00, samples=1 00:20:10.916 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:10.916 lat (usec) : 250=68.64%, 500=30.65%, 1000=0.05% 00:20:10.916 lat (msec) : 50=0.65% 00:20:10.916 cpu : usr=2.87%, sys=1.78%, ctx=1990, majf=0, minf=2 00:20:10.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 issued rwts: total=966,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.916 job2: (groupid=0, jobs=1): err= 0: pid=1146919: Mon Jul 15 12:10:00 2024 00:20:10.916 read: IOPS=1003, BW=4016KiB/s 
(4112kB/s)(4140KiB/1031msec) 00:20:10.916 slat (nsec): min=6339, max=24426, avg=7331.53, stdev=1556.43 00:20:10.916 clat (usec): min=258, max=41913, avg=692.37, stdev=3995.64 00:20:10.916 lat (usec): min=265, max=41936, avg=699.70, stdev=3996.76 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:20:10.916 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:20:10.916 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 343], 00:20:10.916 | 99.00th=[ 502], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:20:10.916 | 99.99th=[41681] 00:20:10.916 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:20:10.916 slat (nsec): min=4921, max=78194, avg=9878.15, stdev=2265.65 00:20:10.916 clat (usec): min=143, max=4080, avg=186.10, stdev=142.97 00:20:10.916 lat (usec): min=153, max=4091, avg=195.97, stdev=143.81 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:20:10.916 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:20:10.916 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:20:10.916 | 99.00th=[ 223], 99.50th=[ 302], 99.90th=[ 3097], 99.95th=[ 4080], 00:20:10.916 | 99.99th=[ 4080] 00:20:10.916 bw ( KiB/s): min= 4096, max= 8192, per=34.37%, avg=6144.00, stdev=2896.31, samples=2 00:20:10.916 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:20:10.916 lat (usec) : 250=59.32%, 500=40.10%, 750=0.04%, 1000=0.04% 00:20:10.916 lat (msec) : 4=0.08%, 10=0.04%, 50=0.39% 00:20:10.916 cpu : usr=1.36%, sys=2.04%, ctx=2571, majf=0, minf=1 00:20:10.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.916 job3: (groupid=0, jobs=1): err= 0: pid=1146920: Mon Jul 15 12:10:00 2024 00:20:10.916 read: IOPS=1342, BW=5371KiB/s (5500kB/s)(5376KiB/1001msec) 00:20:10.916 slat (nsec): min=6313, max=38055, avg=7696.80, stdev=1997.07 00:20:10.916 clat (usec): min=198, max=41430, avg=493.56, stdev=3136.88 00:20:10.916 lat (usec): min=205, max=41438, avg=501.26, stdev=3137.26 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 225], 00:20:10.916 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:20:10.916 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 318], 00:20:10.916 | 99.00th=[ 392], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:20:10.916 | 99.99th=[41681] 00:20:10.916 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:10.916 slat (usec): min=9, max=100, avg=11.43, stdev= 3.11 00:20:10.916 clat (usec): min=136, max=2699, avg=195.46, stdev=106.97 00:20:10.916 lat (usec): min=146, max=2799, avg=206.88, stdev=108.72 00:20:10.916 clat percentiles (usec): 00:20:10.916 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:20:10.916 | 30.00th=[ 161], 40.00th=[ 174], 50.00th=[ 182], 60.00th=[ 192], 00:20:10.916 | 70.00th=[ 206], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:20:10.916 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 2573], 99.95th=[ 2704], 00:20:10.916 | 99.99th=[ 2704] 00:20:10.916 bw ( 
KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:20:10.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:10.916 lat (usec) : 250=81.22%, 500=18.37%, 1000=0.03% 00:20:10.916 lat (msec) : 2=0.03%, 4=0.07%, 50=0.28% 00:20:10.916 cpu : usr=2.20%, sys=3.20%, ctx=2882, majf=0, minf=1 00:20:10.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.916 issued rwts: total=1344,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.916 00:20:10.916 Run status group 0 (all jobs): 00:20:10.916 READ: bw=12.8MiB/s (13.4MB/s), 87.4KiB/s-5371KiB/s (89.5kB/s-5500kB/s), io=13.2MiB (13.8MB), run=1001-1031msec 00:20:10.916 WRITE: bw=17.5MiB/s (18.3MB/s), 2034KiB/s-6138KiB/s (2083kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1031msec 00:20:10.916 00:20:10.916 Disk stats (read/write): 00:20:10.916 nvme0n1: ios=44/512, merge=0/0, ticks=1610/87, in_queue=1697, util=85.37% 00:20:10.916 nvme0n2: ios=1012/1024, merge=0/0, ticks=694/172, in_queue=866, util=90.40% 00:20:10.916 nvme0n3: ios=1086/1536, merge=0/0, ticks=571/285, in_queue=856, util=94.86% 00:20:10.916 nvme0n4: ios=1046/1082, merge=0/0, ticks=1491/225, in_queue=1716, util=94.09% 00:20:10.916 12:10:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:10.916 [global] 00:20:10.916 thread=1 00:20:10.916 invalidate=1 00:20:10.916 rw=randwrite 00:20:10.916 time_based=1 00:20:10.916 runtime=1 00:20:10.916 ioengine=libaio 00:20:10.916 direct=1 00:20:10.916 bs=4096 00:20:10.916 iodepth=1 00:20:10.916 norandommap=0 00:20:10.916 numjobs=1 00:20:10.916 00:20:10.916 verify_dump=1 00:20:10.916 verify_backlog=512 00:20:10.916 verify_state_save=0 00:20:10.916 do_verify=1 00:20:10.916 verify=crc32c-intel 00:20:10.916 [job0] 00:20:10.916 filename=/dev/nvme0n1 00:20:10.916 [job1] 00:20:10.916 filename=/dev/nvme0n2 00:20:10.916 [job2] 00:20:10.916 filename=/dev/nvme0n3 00:20:10.916 [job3] 00:20:10.916 filename=/dev/nvme0n4 00:20:10.916 Could not set queue depth (nvme0n1) 00:20:10.916 Could not set queue depth (nvme0n2) 00:20:10.916 Could not set queue depth (nvme0n3) 00:20:10.916 Could not set queue depth (nvme0n4) 00:20:10.916 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.916 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.916 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.916 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:10.916 fio-3.35 00:20:10.916 Starting 4 threads 00:20:12.287 00:20:12.287 job0: (groupid=0, jobs=1): err= 0: pid=1147291: Mon Jul 15 12:10:02 2024 00:20:12.287 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:20:12.287 slat (nsec): min=9567, max=23560, avg=18447.14, stdev=4164.22 00:20:12.287 clat (usec): min=33258, max=42008, avg=40660.75, stdev=1668.81 00:20:12.287 lat (usec): min=33270, max=42021, avg=40679.20, stdev=1670.19 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[33162], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:20:12.287 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:12.287 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:12.287 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:12.287 | 99.99th=[42206] 00:20:12.287 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:12.287 slat (nsec): min=10125, max=43399, avg=12376.11, stdev=2324.33 00:20:12.287 clat (usec): min=138, max=293, avg=192.52, stdev=20.24 00:20:12.287 lat (usec): min=151, max=330, avg=204.90, stdev=20.47 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 178], 00:20:12.287 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:20:12.287 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:20:12.287 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 293], 00:20:12.287 | 99.99th=[ 293] 00:20:12.287 bw ( KiB/s): min= 4096, max= 4096, per=22.12%, avg=4096.00, stdev= 0.00, samples=1 00:20:12.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:12.287 lat (usec) : 250=94.76%, 500=1.12% 00:20:12.287 lat (msec) : 50=4.12% 00:20:12.287 cpu : usr=0.30%, sys=0.60%, ctx=535, majf=0, minf=1 00:20:12.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.287 job1: (groupid=0, jobs=1): err= 0: pid=1147292: Mon Jul 15 12:10:02 2024 00:20:12.287 read: IOPS=1842, BW=7369KiB/s (7545kB/s)(7376KiB/1001msec) 00:20:12.287 slat (nsec): min=7148, max=46693, avg=8323.70, stdev=2002.54 00:20:12.287 clat (usec): min=206, max=42315, avg=311.71, stdev=1382.52 00:20:12.287 lat (usec): min=215, max=42324, avg=320.03, stdev=1382.84 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 239], 00:20:12.287 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:20:12.287 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 322], 00:20:12.287 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[41157], 99.95th=[42206], 00:20:12.287 | 99.99th=[42206] 00:20:12.287 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:12.287 slat (usec): min=10, max=19859, avg=21.26, stdev=438.58 00:20:12.287 clat (usec): min=135, max=310, avg=173.08, stdev=20.76 00:20:12.287 lat (usec): min=146, max=20134, avg=194.34, stdev=441.34 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:20:12.287 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:20:12.287 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 210], 00:20:12.287 | 99.00th=[ 233], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 281], 00:20:12.287 | 99.99th=[ 310] 00:20:12.287 bw ( KiB/s): min=10112, max=10112, per=54.62%, avg=10112.00, stdev= 0.00, samples=1 00:20:12.287 iops : min= 2528, max= 2528, avg=2528.00, stdev= 0.00, samples=1 00:20:12.287 lat (usec) : 250=74.69%, 500=25.21% 00:20:12.287 lat (msec) : 2=0.03%, 20=0.03%, 50=0.05% 00:20:12.287 cpu : usr=3.50%, sys=6.00%, ctx=3894, majf=0, minf=1 00:20:12.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:20:12.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 issued rwts: total=1844,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.287 job2: (groupid=0, jobs=1): err= 0: pid=1147293: Mon Jul 15 12:10:02 2024 00:20:12.287 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:20:12.287 slat (nsec): min=7416, max=37323, avg=8612.49, stdev=1817.04 00:20:12.287 clat (usec): min=191, max=41181, avg=436.96, stdev=2643.95 00:20:12.287 lat (usec): min=200, max=41191, avg=445.58, stdev=2644.74 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 221], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 245], 00:20:12.287 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:20:12.287 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:20:12.287 | 99.00th=[ 330], 99.50th=[ 433], 99.90th=[41157], 99.95th=[41157], 00:20:12.287 | 99.99th=[41157] 00:20:12.287 write: IOPS=1601, BW=6406KiB/s (6559kB/s)(6412KiB/1001msec); 0 zone resets 00:20:12.287 slat (nsec): min=10634, max=48858, avg=11934.82, stdev=2209.60 00:20:12.287 clat (usec): min=140, max=342, avg=178.74, stdev=18.16 00:20:12.287 lat (usec): min=152, max=379, avg=190.68, stdev=18.61 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:20:12.287 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 184], 00:20:12.287 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:20:12.287 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 343], 00:20:12.287 | 99.99th=[ 343] 00:20:12.287 bw ( KiB/s): min= 4096, max= 4096, per=22.12%, avg=4096.00, stdev= 0.00, samples=1 00:20:12.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:12.287 lat (usec) : 250=65.34%, 500=34.44% 00:20:12.287 lat (msec) : 50=0.22% 00:20:12.287 cpu : usr=2.80%, sys=4.90%, ctx=3140, majf=0, minf=2 00:20:12.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 issued rwts: total=1536,1603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.287 job3: (groupid=0, jobs=1): err= 0: pid=1147294: Mon Jul 15 12:10:02 2024 00:20:12.287 read: IOPS=22, BW=91.1KiB/s (93.3kB/s)(92.0KiB/1010msec) 00:20:12.287 slat (nsec): min=9778, max=26001, avg=21033.61, stdev=3740.44 00:20:12.287 clat (usec): min=309, max=41911, avg=39214.90, stdev=8484.45 00:20:12.287 lat (usec): min=330, max=41933, avg=39235.94, stdev=8484.38 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:20:12.287 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:12.287 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:12.287 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:20:12.287 | 99.99th=[41681] 00:20:12.287 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:20:12.287 slat (nsec): min=10181, max=46308, avg=11496.50, stdev=2486.54 00:20:12.287 clat (usec): min=153, max=329, avg=193.87, stdev=24.49 00:20:12.287 lat (usec): min=164, 
max=369, avg=205.37, stdev=24.99 00:20:12.287 clat percentiles (usec): 00:20:12.287 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:20:12.287 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 192], 00:20:12.287 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 241], 95.00th=[ 241], 00:20:12.287 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 330], 99.95th=[ 330], 00:20:12.287 | 99.99th=[ 330] 00:20:12.287 bw ( KiB/s): min= 4096, max= 4096, per=22.12%, avg=4096.00, stdev= 0.00, samples=1 00:20:12.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:12.287 lat (usec) : 250=94.95%, 500=0.93% 00:20:12.287 lat (msec) : 50=4.11% 00:20:12.287 cpu : usr=0.40%, sys=0.89%, ctx=535, majf=0, minf=1 00:20:12.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:12.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.288 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:12.288 00:20:12.288 Run status group 0 (all jobs): 00:20:12.288 READ: bw=13.2MiB/s (13.9MB/s), 87.8KiB/s-7369KiB/s (89.9kB/s-7545kB/s), io=13.4MiB (14.0MB), run=1001-1010msec 00:20:12.288 WRITE: bw=18.1MiB/s (19.0MB/s), 2028KiB/s-8184KiB/s (2076kB/s-8380kB/s), io=18.3MiB (19.1MB), run=1001-1010msec 00:20:12.288 00:20:12.288 Disk stats (read/write): 00:20:12.288 nvme0n1: ios=42/512, merge=0/0, ticks=1644/92, in_queue=1736, util=90.18% 00:20:12.288 nvme0n2: ios=1560/1772, merge=0/0, ticks=1415/291, in_queue=1706, util=94.42% 00:20:12.288 nvme0n3: ios=1077/1536, merge=0/0, ticks=1120/262, in_queue=1382, util=96.26% 00:20:12.288 nvme0n4: ios=76/512, merge=0/0, ticks=809/95, in_queue=904, util=95.29% 00:20:12.288 12:10:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:12.288 [global] 00:20:12.288 thread=1 00:20:12.288 invalidate=1 00:20:12.288 rw=write 00:20:12.288 time_based=1 00:20:12.288 runtime=1 00:20:12.288 ioengine=libaio 00:20:12.288 direct=1 00:20:12.288 bs=4096 00:20:12.288 iodepth=128 00:20:12.288 norandommap=0 00:20:12.288 numjobs=1 00:20:12.288 00:20:12.288 verify_dump=1 00:20:12.288 verify_backlog=512 00:20:12.288 verify_state_save=0 00:20:12.288 do_verify=1 00:20:12.288 verify=crc32c-intel 00:20:12.288 [job0] 00:20:12.288 filename=/dev/nvme0n1 00:20:12.288 [job1] 00:20:12.288 filename=/dev/nvme0n2 00:20:12.288 [job2] 00:20:12.288 filename=/dev/nvme0n3 00:20:12.288 [job3] 00:20:12.288 filename=/dev/nvme0n4 00:20:12.288 Could not set queue depth (nvme0n1) 00:20:12.288 Could not set queue depth (nvme0n2) 00:20:12.288 Could not set queue depth (nvme0n3) 00:20:12.288 Could not set queue depth (nvme0n4) 00:20:12.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.545 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.545 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.545 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.545 fio-3.35 00:20:12.545 Starting 4 threads 00:20:13.915 00:20:13.915 job0: (groupid=0, jobs=1): err= 0: pid=1147668: Mon Jul 15 12:10:03 2024 00:20:13.915 read: 
IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:20:13.915 slat (nsec): min=1515, max=30695k, avg=149245.73, stdev=1142304.63 00:20:13.915 clat (usec): min=3774, max=60535, avg=17341.88, stdev=10222.05 00:20:13.915 lat (usec): min=3862, max=60566, avg=17491.13, stdev=10311.41 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 7701], 5.00th=[10683], 10.00th=[10814], 20.00th=[11076], 00:20:13.915 | 30.00th=[11338], 40.00th=[12649], 50.00th=[14091], 60.00th=[14877], 00:20:13.915 | 70.00th=[15664], 80.00th=[20841], 90.00th=[31851], 95.00th=[42206], 00:20:13.915 | 99.00th=[56361], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:20:13.915 | 99.99th=[60556] 00:20:13.915 write: IOPS=3692, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec); 0 zone resets 00:20:13.915 slat (usec): min=2, max=33609, avg=118.72, stdev=863.63 00:20:13.915 clat (usec): min=2456, max=62797, avg=16116.28, stdev=7717.30 00:20:13.915 lat (usec): min=2467, max=62810, avg=16235.00, stdev=7790.81 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 3785], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[ 9896], 00:20:13.915 | 30.00th=[10552], 40.00th=[12518], 50.00th=[14877], 60.00th=[17433], 00:20:13.915 | 70.00th=[19792], 80.00th=[20055], 90.00th=[21890], 95.00th=[30278], 00:20:13.915 | 99.00th=[42206], 99.50th=[57410], 99.90th=[62653], 99.95th=[62653], 00:20:13.915 | 99.99th=[62653] 00:20:13.915 bw ( KiB/s): min=12288, max=16472, per=20.55%, avg=14380.00, stdev=2958.53, samples=2 00:20:13.915 iops : min= 3072, max= 4118, avg=3595.00, stdev=739.63, samples=2 00:20:13.915 lat (msec) : 4=0.77%, 10=12.43%, 20=63.48%, 50=21.70%, 100=1.63% 00:20:13.915 cpu : usr=4.49%, sys=3.19%, ctx=361, majf=0, minf=1 00:20:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.915 issued rwts: total=3584,3707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.915 job1: (groupid=0, jobs=1): err= 0: pid=1147670: Mon Jul 15 12:10:03 2024 00:20:13.915 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:20:13.915 slat (nsec): min=1205, max=21597k, avg=162812.98, stdev=1301009.49 00:20:13.915 clat (usec): min=3403, max=71976, avg=20357.36, stdev=11701.71 00:20:13.915 lat (usec): min=3408, max=71984, avg=20520.17, stdev=11827.71 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 7111], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10421], 00:20:13.915 | 30.00th=[11863], 40.00th=[13173], 50.00th=[15533], 60.00th=[19792], 00:20:13.915 | 70.00th=[26346], 80.00th=[30540], 90.00th=[35914], 95.00th=[43254], 00:20:13.915 | 99.00th=[60031], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:20:13.915 | 99.99th=[71828] 00:20:13.915 write: IOPS=3588, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1004msec); 0 zone resets 00:20:13.915 slat (nsec): min=1819, max=10243k, avg=107358.50, stdev=541007.23 00:20:13.915 clat (usec): min=670, max=75266, avg=15093.00, stdev=10934.30 00:20:13.915 lat (usec): min=680, max=80702, avg=15200.36, stdev=10998.93 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 3163], 5.00th=[ 5342], 10.00th=[ 8094], 20.00th=[ 9896], 00:20:13.915 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[12387], 00:20:13.915 | 70.00th=[17433], 80.00th=[19792], 90.00th=[20317], 95.00th=[38011], 00:20:13.915 | 99.00th=[71828], 99.50th=[72877], 
99.90th=[74974], 99.95th=[74974], 00:20:13.915 | 99.99th=[74974] 00:20:13.915 bw ( KiB/s): min= 8192, max=20521, per=20.51%, avg=14356.50, stdev=8717.92, samples=2 00:20:13.915 iops : min= 2048, max= 5130, avg=3589.00, stdev=2179.30, samples=2 00:20:13.915 lat (usec) : 750=0.04% 00:20:13.915 lat (msec) : 2=0.11%, 4=0.96%, 10=17.45%, 20=53.01%, 50=26.19% 00:20:13.915 lat (msec) : 100=2.24% 00:20:13.915 cpu : usr=2.99%, sys=3.49%, ctx=425, majf=0, minf=1 00:20:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.915 issued rwts: total=3584,3603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.915 job2: (groupid=0, jobs=1): err= 0: pid=1147671: Mon Jul 15 12:10:03 2024 00:20:13.915 read: IOPS=5368, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1004msec) 00:20:13.915 slat (nsec): min=1260, max=12367k, avg=91955.26, stdev=560316.60 00:20:13.915 clat (usec): min=1398, max=30959, avg=11504.20, stdev=2963.25 00:20:13.915 lat (usec): min=4817, max=30981, avg=11596.16, stdev=2994.84 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 5145], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[ 9896], 00:20:13.915 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:20:13.915 | 70.00th=[11600], 80.00th=[12387], 90.00th=[14091], 95.00th=[16909], 00:20:13.915 | 99.00th=[25822], 99.50th=[25822], 99.90th=[27919], 99.95th=[30278], 00:20:13.915 | 99.99th=[31065] 00:20:13.915 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:20:13.915 slat (usec): min=2, max=21245, avg=84.45, stdev=506.51 00:20:13.915 clat (usec): min=5739, max=36714, avg=11492.11, stdev=3282.41 00:20:13.915 lat (usec): min=5750, max=36747, avg=11576.56, stdev=3313.77 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 7046], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10028], 00:20:13.915 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:20:13.915 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12649], 95.00th=[14615], 00:20:13.915 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:20:13.915 | 99.99th=[36963] 00:20:13.915 bw ( KiB/s): min=21346, max=23752, per=32.22%, avg=22549.00, stdev=1701.30, samples=2 00:20:13.915 iops : min= 5336, max= 5938, avg=5637.00, stdev=425.68, samples=2 00:20:13.915 lat (msec) : 2=0.01%, 10=20.46%, 20=77.15%, 50=2.38% 00:20:13.915 cpu : usr=4.29%, sys=6.18%, ctx=602, majf=0, minf=1 00:20:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.915 issued rwts: total=5390,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.915 job3: (groupid=0, jobs=1): err= 0: pid=1147672: Mon Jul 15 12:10:03 2024 00:20:13.915 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:20:13.915 slat (nsec): min=1157, max=14989k, avg=108729.32, stdev=774477.94 00:20:13.915 clat (usec): min=1997, max=45201, avg=14797.78, stdev=5630.54 00:20:13.915 lat (usec): min=2006, max=45223, avg=14906.51, stdev=5694.48 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 7898], 5.00th=[ 9634], 
10.00th=[10552], 20.00th=[11469], 00:20:13.915 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12911], 00:20:13.915 | 70.00th=[14746], 80.00th=[18220], 90.00th=[23725], 95.00th=[28181], 00:20:13.915 | 99.00th=[30802], 99.50th=[30802], 99.90th=[36963], 99.95th=[42730], 00:20:13.915 | 99.99th=[45351] 00:20:13.915 write: IOPS=4613, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1002msec); 0 zone resets 00:20:13.915 slat (usec): min=2, max=14790, avg=95.30, stdev=559.95 00:20:13.915 clat (usec): min=771, max=43687, avg=12723.00, stdev=5625.36 00:20:13.915 lat (usec): min=779, max=43695, avg=12818.30, stdev=5675.02 00:20:13.915 clat percentiles (usec): 00:20:13.915 | 1.00th=[ 2442], 5.00th=[ 6849], 10.00th=[ 9372], 20.00th=[10814], 00:20:13.915 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:13.915 | 70.00th=[11731], 80.00th=[12911], 90.00th=[19530], 95.00th=[21627], 00:20:13.915 | 99.00th=[40633], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:20:13.915 | 99.99th=[43779] 00:20:13.915 bw ( KiB/s): min=16416, max=20480, per=26.36%, avg=18448.00, stdev=2873.68, samples=2 00:20:13.915 iops : min= 4104, max= 5120, avg=4612.00, stdev=718.42, samples=2 00:20:13.915 lat (usec) : 1000=0.04% 00:20:13.915 lat (msec) : 2=0.44%, 4=0.78%, 10=9.39%, 20=75.35%, 50=13.99% 00:20:13.915 cpu : usr=3.40%, sys=5.00%, ctx=454, majf=0, minf=1 00:20:13.915 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:13.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:13.915 issued rwts: total=4608,4623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.915 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:13.915 00:20:13.915 Run status group 0 (all jobs): 00:20:13.915 READ: bw=66.8MiB/s (70.0MB/s), 13.9MiB/s-21.0MiB/s (14.6MB/s-22.0MB/s), io=67.1MiB (70.3MB), run=1002-1004msec 00:20:13.915 WRITE: bw=68.3MiB/s (71.7MB/s), 14.0MiB/s-21.9MiB/s (14.7MB/s-23.0MB/s), io=68.6MiB (71.9MB), run=1002-1004msec 00:20:13.915 00:20:13.915 Disk stats (read/write): 00:20:13.915 nvme0n1: ios=3117/3151, merge=0/0, ticks=51478/46807, in_queue=98285, util=98.50% 00:20:13.915 nvme0n2: ios=3087/3191, merge=0/0, ticks=37318/23910, in_queue=61228, util=87.09% 00:20:13.915 nvme0n3: ios=4608/4658, merge=0/0, ticks=26514/24985, in_queue=51499, util=88.96% 00:20:13.915 nvme0n4: ios=3600/4083, merge=0/0, ticks=35556/31209, in_queue=66765, util=97.06% 00:20:13.915 12:10:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:13.915 [global] 00:20:13.915 thread=1 00:20:13.915 invalidate=1 00:20:13.915 rw=randwrite 00:20:13.915 time_based=1 00:20:13.915 runtime=1 00:20:13.916 ioengine=libaio 00:20:13.916 direct=1 00:20:13.916 bs=4096 00:20:13.916 iodepth=128 00:20:13.916 norandommap=0 00:20:13.916 numjobs=1 00:20:13.916 00:20:13.916 verify_dump=1 00:20:13.916 verify_backlog=512 00:20:13.916 verify_state_save=0 00:20:13.916 do_verify=1 00:20:13.916 verify=crc32c-intel 00:20:13.916 [job0] 00:20:13.916 filename=/dev/nvme0n1 00:20:13.916 [job1] 00:20:13.916 filename=/dev/nvme0n2 00:20:13.916 [job2] 00:20:13.916 filename=/dev/nvme0n3 00:20:13.916 [job3] 00:20:13.916 filename=/dev/nvme0n4 00:20:13.916 Could not set queue depth (nvme0n1) 00:20:13.916 Could not set queue depth (nvme0n2) 00:20:13.916 Could not set queue depth (nvme0n3) 00:20:13.916 Could not set 
queue depth (nvme0n4) 00:20:14.173 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.173 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.173 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.173 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.173 fio-3.35 00:20:14.173 Starting 4 threads 00:20:15.547 00:20:15.547 job0: (groupid=0, jobs=1): err= 0: pid=1148040: Mon Jul 15 12:10:05 2024 00:20:15.547 read: IOPS=6583, BW=25.7MiB/s (27.0MB/s)(26.0MiB/1011msec) 00:20:15.547 slat (nsec): min=1030, max=9007.4k, avg=79283.77, stdev=559532.39 00:20:15.547 clat (usec): min=2484, max=27807, avg=9597.03, stdev=2669.74 00:20:15.547 lat (usec): min=2486, max=27830, avg=9676.31, stdev=2713.26 00:20:15.547 clat percentiles (usec): 00:20:15.547 | 1.00th=[ 3490], 5.00th=[ 6325], 10.00th=[ 7504], 20.00th=[ 7832], 00:20:15.547 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9896], 00:20:15.547 | 70.00th=[10421], 80.00th=[11469], 90.00th=[13042], 95.00th=[14746], 00:20:15.547 | 99.00th=[17695], 99.50th=[18482], 99.90th=[24511], 99.95th=[27657], 00:20:15.547 | 99.99th=[27919] 00:20:15.547 write: IOPS=6890, BW=26.9MiB/s (28.2MB/s)(27.2MiB/1011msec); 0 zone resets 00:20:15.547 slat (nsec): min=1701, max=6236.7k, avg=62413.99, stdev=246963.77 00:20:15.547 clat (usec): min=1830, max=27837, avg=9231.41, stdev=3826.66 00:20:15.547 lat (usec): min=1837, max=27858, avg=9293.82, stdev=3849.09 00:20:15.547 clat percentiles (usec): 00:20:15.547 | 1.00th=[ 2474], 5.00th=[ 3982], 10.00th=[ 5669], 20.00th=[ 7308], 00:20:15.547 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8717], 00:20:15.547 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[12125], 95.00th=[17957], 00:20:15.547 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27132], 99.95th=[27395], 00:20:15.547 | 99.99th=[27919] 00:20:15.547 bw ( KiB/s): min=22984, max=31728, per=39.50%, avg=27356.00, stdev=6182.94, samples=2 00:20:15.547 iops : min= 5746, max= 7932, avg=6839.00, stdev=1545.74, samples=2 00:20:15.547 lat (msec) : 2=0.13%, 4=3.11%, 10=64.59%, 20=30.33%, 50=1.84% 00:20:15.547 cpu : usr=4.06%, sys=5.94%, ctx=927, majf=0, minf=1 00:20:15.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:15.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.547 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:15.547 issued rwts: total=6656,6966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.547 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:15.547 job1: (groupid=0, jobs=1): err= 0: pid=1148041: Mon Jul 15 12:10:05 2024 00:20:15.547 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:20:15.547 slat (nsec): min=1484, max=17151k, avg=115151.38, stdev=908188.63 00:20:15.547 clat (usec): min=2254, max=61878, avg=14395.93, stdev=9157.52 00:20:15.547 lat (usec): min=2260, max=61881, avg=14511.08, stdev=9253.68 00:20:15.547 clat percentiles (usec): 00:20:15.547 | 1.00th=[ 2900], 5.00th=[ 4146], 10.00th=[ 7963], 20.00th=[ 9765], 00:20:15.547 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11338], 60.00th=[11994], 00:20:15.547 | 70.00th=[13829], 80.00th=[21103], 90.00th=[22938], 95.00th=[29492], 00:20:15.547 | 99.00th=[59507], 99.50th=[60556], 99.90th=[62129], 
99.95th=[62129], 00:20:15.547 | 99.99th=[62129] 00:20:15.547 write: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1011msec); 0 zone resets 00:20:15.547 slat (usec): min=2, max=15302, avg=161.04, stdev=946.08 00:20:15.547 clat (usec): min=1736, max=128184, avg=24318.04, stdev=23203.63 00:20:15.547 lat (usec): min=1742, max=128200, avg=24479.09, stdev=23355.52 00:20:15.547 clat percentiles (msec): 00:20:15.547 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:20:15.548 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 17], 00:20:15.548 | 70.00th=[ 22], 80.00th=[ 43], 90.00th=[ 59], 95.00th=[ 69], 00:20:15.548 | 99.00th=[ 113], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:20:15.548 | 99.99th=[ 129] 00:20:15.548 bw ( KiB/s): min= 9992, max=16624, per=19.22%, avg=13308.00, stdev=4689.53, samples=2 00:20:15.548 iops : min= 2498, max= 4156, avg=3327.00, stdev=1172.38, samples=2 00:20:15.548 lat (msec) : 2=0.29%, 4=1.82%, 10=26.94%, 20=40.90%, 50=20.73% 00:20:15.548 lat (msec) : 100=8.47%, 250=0.84% 00:20:15.548 cpu : usr=2.77%, sys=3.96%, ctx=329, majf=0, minf=1 00:20:15.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:20:15.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:15.548 issued rwts: total=3072,3454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:15.548 job2: (groupid=0, jobs=1): err= 0: pid=1148042: Mon Jul 15 12:10:05 2024 00:20:15.548 read: IOPS=3374, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1009msec) 00:20:15.548 slat (nsec): min=1082, max=13350k, avg=103424.86, stdev=791577.44 00:20:15.548 clat (usec): min=2837, max=66162, avg=13563.49, stdev=6519.52 00:20:15.548 lat (usec): min=2841, max=66169, avg=13666.91, stdev=6611.97 00:20:15.548 clat percentiles (usec): 00:20:15.548 | 1.00th=[ 4113], 5.00th=[ 6915], 10.00th=[ 9765], 20.00th=[10945], 00:20:15.548 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:20:15.548 | 70.00th=[13435], 80.00th=[15926], 90.00th=[18744], 95.00th=[21890], 00:20:15.548 | 99.00th=[45876], 99.50th=[56886], 99.90th=[66323], 99.95th=[66323], 00:20:15.548 | 99.99th=[66323] 00:20:15.548 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:20:15.548 slat (usec): min=2, max=12698, avg=147.42, stdev=917.84 00:20:15.548 clat (usec): min=881, max=90859, avg=22811.24, stdev=20013.48 00:20:15.548 lat (usec): min=894, max=90866, avg=22958.66, stdev=20132.00 00:20:15.548 clat percentiles (usec): 00:20:15.548 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 7504], 20.00th=[10814], 00:20:15.548 | 30.00th=[11207], 40.00th=[11731], 50.00th=[14877], 60.00th=[16909], 00:20:15.548 | 70.00th=[21103], 80.00th=[28705], 90.00th=[56361], 95.00th=[74974], 00:20:15.548 | 99.00th=[87557], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:20:15.548 | 99.99th=[90702] 00:20:15.548 bw ( KiB/s): min=10648, max=18024, per=20.70%, avg=14336.00, stdev=5215.62, samples=2 00:20:15.548 iops : min= 2662, max= 4506, avg=3584.00, stdev=1303.90, samples=2 00:20:15.548 lat (usec) : 1000=0.04% 00:20:15.548 lat (msec) : 4=0.47%, 10=12.75%, 20=63.96%, 50=16.11%, 100=6.67% 00:20:15.548 cpu : usr=2.18%, sys=4.56%, ctx=307, majf=0, minf=1 00:20:15.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:15.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.548 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:15.548 issued rwts: total=3405,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:15.548 job3: (groupid=0, jobs=1): err= 0: pid=1148043: Mon Jul 15 12:10:05 2024 00:20:15.548 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:20:15.548 slat (nsec): min=1265, max=10701k, avg=94501.73, stdev=673853.66 00:20:15.548 clat (usec): min=3940, max=33005, avg=12288.05, stdev=3657.02 00:20:15.548 lat (usec): min=3951, max=34493, avg=12382.55, stdev=3708.67 00:20:15.548 clat percentiles (usec): 00:20:15.548 | 1.00th=[ 5538], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10290], 00:20:15.548 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:20:15.548 | 70.00th=[12125], 80.00th=[13566], 90.00th=[16909], 95.00th=[19268], 00:20:15.548 | 99.00th=[28181], 99.50th=[31327], 99.90th=[32900], 99.95th=[32900], 00:20:15.548 | 99.99th=[32900] 00:20:15.548 write: IOPS=3476, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1012msec); 0 zone resets 00:20:15.548 slat (usec): min=2, max=16784, avg=181.43, stdev=1006.76 00:20:15.548 clat (usec): min=1650, max=140695, avg=25780.52, stdev=28932.48 00:20:15.548 lat (usec): min=1662, max=140705, avg=25961.95, stdev=29120.28 00:20:15.548 clat percentiles (msec): 00:20:15.548 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:20:15.548 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:20:15.548 | 70.00th=[ 15], 80.00th=[ 43], 90.00th=[ 77], 95.00th=[ 89], 00:20:15.548 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:20:15.548 | 99.99th=[ 142] 00:20:15.548 bw ( KiB/s): min= 6768, max=20352, per=19.58%, avg=13560.00, stdev=9605.34, samples=2 00:20:15.548 iops : min= 1692, max= 5088, avg=3390.00, stdev=2401.33, samples=2 00:20:15.548 lat (msec) : 2=0.14%, 4=0.68%, 10=14.29%, 20=67.91%, 50=6.68% 00:20:15.548 lat (msec) : 100=8.54%, 250=1.76% 00:20:15.548 cpu : usr=2.67%, sys=4.35%, ctx=431, majf=0, minf=1 00:20:15.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:20:15.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:15.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:15.548 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:15.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:15.548 00:20:15.548 Run status group 0 (all jobs): 00:20:15.548 READ: bw=62.5MiB/s (65.6MB/s), 11.9MiB/s-25.7MiB/s (12.4MB/s-27.0MB/s), io=63.3MiB (66.4MB), run=1009-1012msec 00:20:15.548 WRITE: bw=67.6MiB/s (70.9MB/s), 13.3MiB/s-26.9MiB/s (14.0MB/s-28.2MB/s), io=68.4MiB (71.8MB), run=1009-1012msec 00:20:15.548 00:20:15.548 Disk stats (read/write): 00:20:15.548 nvme0n1: ios=5807/6144, merge=0/0, ticks=50636/50513, in_queue=101149, util=86.97% 00:20:15.548 nvme0n2: ios=2584/2815, merge=0/0, ticks=35276/70777, in_queue=106053, util=98.48% 00:20:15.548 nvme0n3: ios=2591/2926, merge=0/0, ticks=33906/71642, in_queue=105548, util=97.40% 00:20:15.548 nvme0n4: ios=2320/2560, merge=0/0, ticks=27437/78179, in_queue=105616, util=90.79% 00:20:15.548 12:10:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:15.548 12:10:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1148277 00:20:15.548 12:10:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:15.548 12:10:05 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:15.548 [global] 00:20:15.548 thread=1 00:20:15.548 invalidate=1 00:20:15.548 rw=read 00:20:15.548 time_based=1 00:20:15.548 runtime=10 00:20:15.548 ioengine=libaio 00:20:15.548 direct=1 00:20:15.548 bs=4096 00:20:15.548 iodepth=1 00:20:15.549 norandommap=1 00:20:15.549 numjobs=1 00:20:15.549 00:20:15.549 [job0] 00:20:15.549 filename=/dev/nvme0n1 00:20:15.549 [job1] 00:20:15.549 filename=/dev/nvme0n2 00:20:15.549 [job2] 00:20:15.549 filename=/dev/nvme0n3 00:20:15.549 [job3] 00:20:15.549 filename=/dev/nvme0n4 00:20:15.549 Could not set queue depth (nvme0n1) 00:20:15.549 Could not set queue depth (nvme0n2) 00:20:15.549 Could not set queue depth (nvme0n3) 00:20:15.549 Could not set queue depth (nvme0n4) 00:20:15.549 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:15.549 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:15.549 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:15.549 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:15.549 fio-3.35 00:20:15.549 Starting 4 threads 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:18.824 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1945600, buflen=4096 00:20:18.824 fio: pid=1148420, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:18.824 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=34267136, buflen=4096 00:20:18.824 fio: pid=1148419, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:18.824 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:18.824 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=311296, buflen=4096 00:20:18.824 fio: pid=1148417, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:19.080 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5677056, buflen=4096 00:20:19.080 fio: pid=1148418, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:19.080 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.080 12:10:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:19.080 00:20:19.080 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1148417: Mon Jul 15 12:10:08 2024 00:20:19.080 read: IOPS=24, BW=97.9KiB/s 
(100kB/s)(304KiB/3105msec) 00:20:19.080 slat (usec): min=9, max=2737, avg=57.36, stdev=309.45 00:20:19.080 clat (usec): min=474, max=43252, avg=40507.99, stdev=4668.86 00:20:19.080 lat (usec): min=502, max=44000, avg=40565.75, stdev=4684.33 00:20:19.080 clat percentiles (usec): 00:20:19.080 | 1.00th=[ 474], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:20:19.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:20:19.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:20:19.080 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:19.080 | 99.99th=[43254] 00:20:19.080 bw ( KiB/s): min= 96, max= 104, per=0.77%, avg=97.60, stdev= 3.58, samples=5 00:20:19.080 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:20:19.080 lat (usec) : 500=1.30% 00:20:19.080 lat (msec) : 50=97.40% 00:20:19.080 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=1 00:20:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.080 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1148418: Mon Jul 15 12:10:08 2024 00:20:19.080 read: IOPS=425, BW=1700KiB/s (1741kB/s)(5544KiB/3261msec) 00:20:19.080 slat (usec): min=5, max=29522, avg=34.50, stdev=819.14 00:20:19.080 clat (usec): min=202, max=44142, avg=2309.87, stdev=8955.66 00:20:19.080 lat (usec): min=209, max=49007, avg=2344.39, stdev=9014.03 00:20:19.080 clat percentiles (usec): 00:20:19.080 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 221], 00:20:19.080 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 251], 00:20:19.080 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[40633], 00:20:19.080 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[44303], 00:20:19.080 | 99.99th=[44303] 00:20:19.080 bw ( KiB/s): min= 96, max= 7008, per=9.88%, avg=1248.50, stdev=2821.57, samples=6 00:20:19.080 iops : min= 24, max= 1752, avg=312.00, stdev=705.45, samples=6 00:20:19.080 lat (usec) : 250=59.12%, 500=35.76% 00:20:19.080 lat (msec) : 50=5.05% 00:20:19.080 cpu : usr=0.18%, sys=0.43%, ctx=1389, majf=0, minf=1 00:20:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 issued rwts: total=1387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.080 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1148419: Mon Jul 15 12:10:08 2024 00:20:19.080 read: IOPS=2896, BW=11.3MiB/s (11.9MB/s)(32.7MiB/2889msec) 00:20:19.080 slat (nsec): min=5986, max=33271, avg=6952.79, stdev=1152.66 00:20:19.080 clat (usec): min=206, max=41953, avg=334.71, stdev=1842.47 00:20:19.080 lat (usec): min=212, max=41975, avg=341.66, stdev=1842.92 00:20:19.080 clat percentiles (usec): 00:20:19.080 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 241], 00:20:19.080 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:20:19.080 | 70.00th=[ 255], 80.00th=[ 
260], 90.00th=[ 265], 95.00th=[ 273], 00:20:19.080 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[41157], 99.95th=[41157], 00:20:19.080 | 99.99th=[42206] 00:20:19.080 bw ( KiB/s): min= 4480, max=15496, per=88.20%, avg=11147.20, stdev=5873.36, samples=5 00:20:19.080 iops : min= 1120, max= 3874, avg=2786.80, stdev=1468.34, samples=5 00:20:19.080 lat (usec) : 250=51.79%, 500=47.99%, 750=0.01% 00:20:19.080 lat (msec) : 50=0.20% 00:20:19.080 cpu : usr=0.80%, sys=2.42%, ctx=8367, majf=0, minf=1 00:20:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 issued rwts: total=8367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.080 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1148420: Mon Jul 15 12:10:08 2024 00:20:19.080 read: IOPS=175, BW=700KiB/s (717kB/s)(1900KiB/2713msec) 00:20:19.080 slat (nsec): min=6331, max=30686, avg=9133.70, stdev=5339.45 00:20:19.080 clat (usec): min=204, max=41931, avg=5655.58, stdev=13835.97 00:20:19.080 lat (usec): min=212, max=41956, avg=5664.68, stdev=13840.75 00:20:19.080 clat percentiles (usec): 00:20:19.080 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 241], 00:20:19.080 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 255], 00:20:19.080 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[41157], 95.00th=[41157], 00:20:19.080 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:20:19.080 | 99.99th=[41681] 00:20:19.080 bw ( KiB/s): min= 96, max= 2880, per=5.95%, avg=752.00, stdev=1208.01, samples=5 00:20:19.080 iops : min= 24, max= 720, avg=188.00, stdev=302.00, samples=5 00:20:19.080 lat (usec) : 250=43.49%, 500=42.86%, 750=0.21% 00:20:19.080 lat (msec) : 50=13.24% 00:20:19.080 cpu : usr=0.04%, sys=0.26%, ctx=476, majf=0, minf=2 00:20:19.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.080 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:19.080 00:20:19.080 Run status group 0 (all jobs): 00:20:19.080 READ: bw=12.3MiB/s (12.9MB/s), 97.9KiB/s-11.3MiB/s (100kB/s-11.9MB/s), io=40.2MiB (42.2MB), run=2713-3261msec 00:20:19.080 00:20:19.080 Disk stats (read/write): 00:20:19.080 nvme0n1: ios=109/0, merge=0/0, ticks=3836/0, in_queue=3836, util=99.30% 00:20:19.080 nvme0n2: ios=939/0, merge=0/0, ticks=3070/0, in_queue=3070, util=95.39% 00:20:19.080 nvme0n3: ios=8317/0, merge=0/0, ticks=2734/0, in_queue=2734, util=96.49% 00:20:19.080 nvme0n4: ios=472/0, merge=0/0, ticks=2562/0, in_queue=2562, util=96.41% 00:20:19.336 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.336 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:19.593 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.593 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:19.593 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.593 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:19.849 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:19.850 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:20.106 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:20.106 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1148277 00:20:20.106 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:20.106 12:10:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:20.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:20.106 nvmf hotplug test: fio failed as expected 00:20:20.106 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.363 rmmod nvme_tcp 00:20:20.363 rmmod nvme_fabrics 00:20:20.363 rmmod nvme_keyring 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1145578 ']' 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1145578 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1145578 ']' 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1145578 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.363 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1145578 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1145578' 00:20:20.622 killing process with pid 1145578 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1145578 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1145578 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.622 12:10:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.155 12:10:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:23.155 00:20:23.155 real 0m26.033s 00:20:23.155 user 1m43.885s 00:20:23.155 sys 0m7.850s 00:20:23.155 12:10:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:23.155 12:10:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.155 ************************************ 00:20:23.155 END TEST nvmf_fio_target 00:20:23.155 ************************************ 00:20:23.155 12:10:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:23.155 12:10:12 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:23.155 12:10:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:23.155 12:10:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.155 12:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.155 ************************************ 00:20:23.155 START TEST nvmf_bdevio 00:20:23.155 ************************************ 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:23.155 * Looking for test storage... 00:20:23.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.155 12:10:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.430 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.430 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.430 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.431 
Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.431 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:20:28.690 00:20:28.690 --- 10.0.0.2 ping statistics --- 00:20:28.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.690 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:20:28.690 00:20:28.690 --- 10.0.0.1 ping statistics --- 00:20:28.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.690 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1152652 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1152652 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1152652 ']' 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.690 12:10:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:28.690 [2024-07-15 12:10:18.633273] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:20:28.690 [2024-07-15 12:10:18.633316] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.690 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.949 [2024-07-15 12:10:18.703531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.949 [2024-07-15 12:10:18.745337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.949 [2024-07-15 12:10:18.745375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:28.949 [2024-07-15 12:10:18.745382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.949 [2024-07-15 12:10:18.745388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.949 [2024-07-15 12:10:18.745393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.949 [2024-07-15 12:10:18.745512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:28.949 [2024-07-15 12:10:18.745621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:28.949 [2024-07-15 12:10:18.745730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.949 [2024-07-15 12:10:18.745731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:29.517 [2024-07-15 12:10:19.483115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:29.517 Malloc0 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.517 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
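The bdevio target is then provisioned entirely over RPC: a TCP transport, a 64 MiB Malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420 (the listening notice follows just below). rpc_cmd is the harness wrapper around the target's default RPC socket; run stand-alone, the same sequence would look roughly like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420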
00:20:29.776 [2024-07-15 12:10:19.534766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.776 { 00:20:29.776 "params": { 00:20:29.776 "name": "Nvme$subsystem", 00:20:29.776 "trtype": "$TEST_TRANSPORT", 00:20:29.776 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.776 "adrfam": "ipv4", 00:20:29.776 "trsvcid": "$NVMF_PORT", 00:20:29.776 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.776 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.776 "hdgst": ${hdgst:-false}, 00:20:29.776 "ddgst": ${ddgst:-false} 00:20:29.776 }, 00:20:29.776 "method": "bdev_nvme_attach_controller" 00:20:29.776 } 00:20:29.776 EOF 00:20:29.776 )") 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:29.776 12:10:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:29.777 "params": { 00:20:29.777 "name": "Nvme1", 00:20:29.777 "trtype": "tcp", 00:20:29.777 "traddr": "10.0.0.2", 00:20:29.777 "adrfam": "ipv4", 00:20:29.777 "trsvcid": "4420", 00:20:29.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.777 "hdgst": false, 00:20:29.777 "ddgst": false 00:20:29.777 }, 00:20:29.777 "method": "bdev_nvme_attach_controller" 00:20:29.777 }' 00:20:29.777 [2024-07-15 12:10:19.584845] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
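bdevio is not pointed at the target directly; it reads an SPDK JSON configuration from /dev/fd/62, built by gen_nvmf_target_json from the parameters printed above, and attaches to the target as an NVMe/TCP host so that the remote namespace shows up as bdev Nvme1n1. Folded into SPDK's usual JSON config skeleton (the outer subsystems/bdev framing, and any extra bdev options the helper adds, are inferred rather than visible in the trace), the generated document is approximately:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }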
00:20:29.777 [2024-07-15 12:10:19.584888] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152901 ] 00:20:29.777 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.777 [2024-07-15 12:10:19.652928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.777 [2024-07-15 12:10:19.694931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.777 [2024-07-15 12:10:19.695051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.777 [2024-07-15 12:10:19.695051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.036 I/O targets: 00:20:30.036 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:30.036 00:20:30.036 00:20:30.036 CUnit - A unit testing framework for C - Version 2.1-3 00:20:30.036 http://cunit.sourceforge.net/ 00:20:30.036 00:20:30.036 00:20:30.036 Suite: bdevio tests on: Nvme1n1 00:20:30.295 Test: blockdev write read block ...passed 00:20:30.295 Test: blockdev write zeroes read block ...passed 00:20:30.295 Test: blockdev write zeroes read no split ...passed 00:20:30.295 Test: blockdev write zeroes read split ...passed 00:20:30.295 Test: blockdev write zeroes read split partial ...passed 00:20:30.295 Test: blockdev reset ...[2024-07-15 12:10:20.205732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.295 [2024-07-15 12:10:20.205796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45070 (9): Bad file descriptor 00:20:30.295 [2024-07-15 12:10:20.263017] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:30.295 passed 00:20:30.601 Test: blockdev write read 8 blocks ...passed 00:20:30.601 Test: blockdev write read size > 128k ...passed 00:20:30.601 Test: blockdev write read invalid size ...passed 00:20:30.601 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:30.601 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:30.601 Test: blockdev write read max offset ...passed 00:20:30.601 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:30.601 Test: blockdev writev readv 8 blocks ...passed 00:20:30.601 Test: blockdev writev readv 30 x 1block ...passed 00:20:30.601 Test: blockdev writev readv block ...passed 00:20:30.601 Test: blockdev writev readv size > 128k ...passed 00:20:30.601 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:30.601 Test: blockdev comparev and writev ...[2024-07-15 12:10:20.514503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.514532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.514545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.514554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.514822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.514833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.514845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.514851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.515109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.515119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.515132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.515138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.515406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.515416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.515428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.601 [2024-07-15 12:10:20.515435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:30.601 passed 00:20:30.601 Test: blockdev nvme passthru rw ...passed 00:20:30.601 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:10:20.597608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.601 [2024-07-15 12:10:20.597625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.597754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.601 [2024-07-15 12:10:20.597765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.597892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.601 [2024-07-15 12:10:20.597906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:30.601 [2024-07-15 12:10:20.598027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.601 [2024-07-15 12:10:20.598036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:30.601 passed 00:20:30.860 Test: blockdev nvme admin passthru ...passed 00:20:30.860 Test: blockdev copy ...passed 00:20:30.860 00:20:30.860 Run Summary: Type Total Ran Passed Failed Inactive 00:20:30.860 suites 1 1 n/a 0 0 00:20:30.860 tests 23 23 23 0 0 00:20:30.860 asserts 152 152 152 0 n/a 00:20:30.860 00:20:30.860 Elapsed time = 1.313 seconds 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.860 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.860 rmmod nvme_tcp 00:20:30.860 rmmod nvme_fabrics 00:20:30.860 rmmod nvme_keyring 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1152652 ']' 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1152652 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1152652 ']' 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1152652 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1152652 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1152652' 00:20:31.120 killing process with pid 1152652 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1152652 00:20:31.120 12:10:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1152652 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.120 12:10:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.655 12:10:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.655 00:20:33.655 real 0m10.495s 00:20:33.655 user 0m13.692s 00:20:33.655 sys 0m4.799s 00:20:33.655 12:10:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.655 12:10:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.655 ************************************ 00:20:33.655 END TEST nvmf_bdevio 00:20:33.655 ************************************ 00:20:33.655 12:10:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:33.655 12:10:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:33.655 12:10:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:33.655 12:10:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.655 12:10:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.655 ************************************ 00:20:33.655 START TEST nvmf_auth_target 00:20:33.655 ************************************ 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:33.655 * Looking for test storage... 
00:20:33.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.655 12:10:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.030 12:10:28 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.030 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:39.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:20:39.031 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.031 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.031 12:10:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:39.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:39.291 00:20:39.291 --- 10.0.0.2 ping statistics --- 00:20:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.291 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:39.291 00:20:39.291 --- 10.0.0.1 ping statistics --- 00:20:39.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.291 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1156623 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1156623 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1156623 ']' 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
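From this point the auth test drives two RPC-driven SPDK applications instead of one. The nvmf_tgt just launched inside the namespace (with -L nvmf_auth) is the authenticating target and answers on the default RPC socket via rpc_cmd; a second spdk_tgt, started a little further down with -m 2 -r /var/tmp/host.sock -L nvme_auth, plays the host role and is reached through the hostrpc wrapper (scripts/rpc.py -s /var/tmp/host.sock). Keeping the two sockets separate is what lets one script configure both ends of the DH-HMAC-CHAP exchange, e.g. (paths and NQNs as they appear later in this log):

    # target side: register the secret and require it for this host NQN
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Ulj
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: same secret, then attach with DH-HMAC-CHAP
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ulj
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The surrounding loops in auth.sh then walk every digest (sha256/384/512) and DH group (null through ffdhe8192) declared at its top, resetting the host's bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups before each attach.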
00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.291 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1156666 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=03df3f43d5f1076b99779fead9668aa8e535e5a53f7d8573 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ulj 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 03df3f43d5f1076b99779fead9668aa8e535e5a53f7d8573 0 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 03df3f43d5f1076b99779fead9668aa8e535e5a53f7d8573 0 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=03df3f43d5f1076b99779fead9668aa8e535e5a53f7d8573 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ulj 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ulj 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Ulj 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=94961560f9a5aa6c1cf5a7c4b8445055a708354483b43be9985ee6489953ea4d 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PLL 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 94961560f9a5aa6c1cf5a7c4b8445055a708354483b43be9985ee6489953ea4d 3 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 94961560f9a5aa6c1cf5a7c4b8445055a708354483b43be9985ee6489953ea4d 3 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=94961560f9a5aa6c1cf5a7c4b8445055a708354483b43be9985ee6489953ea4d 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:39.551 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PLL 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PLL 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.PLL 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e5e1496f13c383f5ddd8de0df3b83fb7 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.omC 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e5e1496f13c383f5ddd8de0df3b83fb7 1 00:20:39.811 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e5e1496f13c383f5ddd8de0df3b83fb7 1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e5e1496f13c383f5ddd8de0df3b83fb7 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.omC 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.omC 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.omC 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff39d5c6754084f5e418eb71321b3b36c74c163af9ba03c8 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SlD 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff39d5c6754084f5e418eb71321b3b36c74c163af9ba03c8 2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff39d5c6754084f5e418eb71321b3b36c74c163af9ba03c8 2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff39d5c6754084f5e418eb71321b3b36c74c163af9ba03c8 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SlD 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SlD 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.SlD 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9ba036794e980466b67b9ce628a2f682e89b58f083ed3393 00:20:39.812 
12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.jGf 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9ba036794e980466b67b9ce628a2f682e89b58f083ed3393 2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9ba036794e980466b67b9ce628a2f682e89b58f083ed3393 2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9ba036794e980466b67b9ce628a2f682e89b58f083ed3393 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.jGf 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.jGf 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.jGf 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f19bddde4014d176e881359eb41ba71c 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mNk 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f19bddde4014d176e881359eb41ba71c 1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f19bddde4014d176e881359eb41ba71c 1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f19bddde4014d176e881359eb41ba71c 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mNk 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mNk 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.mNk 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:39.812 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=353bcb31a380e89b5bcbd15c0d8cfa491859dce56cbd8be3e7e5fe16e279053f 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SvK 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 353bcb31a380e89b5bcbd15c0d8cfa491859dce56cbd8be3e7e5fe16e279053f 3 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 353bcb31a380e89b5bcbd15c0d8cfa491859dce56cbd8be3e7e5fe16e279053f 3 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=353bcb31a380e89b5bcbd15c0d8cfa491859dce56cbd8be3e7e5fe16e279053f 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SvK 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SvK 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.SvK 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1156623 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1156623 ']' 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
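All four secrets above (and the three controller-side counterparts) come out of the same two helpers: gen_dhchap_key reads len/2 random bytes from /dev/urandom with xxd -p, and format_dhchap_key wraps the resulting hex string in the DH-HMAC-CHAP ASCII representation before it is written to a mode-0600 file under /tmp. The python step of the wrapper is not expanded in the trace; assuming it follows the standard NVMe-oF secret encoding (base64 over the secret bytes plus their little-endian CRC-32, prefixed with DHHC-1 and a two-digit hash identifier), a self-contained equivalent of "gen_dhchap_key null 48" would be roughly:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48-character hex string (the secret material)
    digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512 (cf. the digests map above)
    python3 -c "import base64, zlib; k = b'$key'; crc = zlib.crc32(k).to_bytes(4, 'little'); print('DHHC-1:{:02x}:{}:'.format($digest, base64.b64encode(k + crc).decode()))"

The keys land in files such as /tmp/spdk.key-null.Ulj so they can be handed to both sides by path: keyring_file_add_key on the target, and the same call through hostrpc on the host, as traced below.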
00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.071 12:10:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.071 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.071 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:40.071 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1156666 /var/tmp/host.sock 00:20:40.071 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1156666 ']' 00:20:40.071 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:20:40.072 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.072 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:40.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:40.072 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.072 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ulj 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.330 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ulj 00:20:40.331 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ulj 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.PLL ]] 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLL 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLL 00:20:40.590 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLL 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.omC 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.omC 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.omC 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.SlD ]] 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SlD 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SlD 00:20:40.849 12:10:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SlD 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jGf 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.jGf 00:20:41.108 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.jGf 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.mNk ]] 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mNk 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mNk 00:20:41.366 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.mNk 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SvK 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SvK 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SvK 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:41.634 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.893 12:10:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.152 00:20:42.152 12:10:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.152 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.152 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.411 { 00:20:42.411 "cntlid": 1, 00:20:42.411 "qid": 0, 00:20:42.411 "state": "enabled", 00:20:42.411 "thread": "nvmf_tgt_poll_group_000", 00:20:42.411 "listen_address": { 00:20:42.411 "trtype": "TCP", 00:20:42.411 "adrfam": "IPv4", 00:20:42.411 "traddr": "10.0.0.2", 00:20:42.411 "trsvcid": "4420" 00:20:42.411 }, 00:20:42.411 "peer_address": { 00:20:42.411 "trtype": "TCP", 00:20:42.411 "adrfam": "IPv4", 00:20:42.411 "traddr": "10.0.0.1", 00:20:42.411 "trsvcid": "40974" 00:20:42.411 }, 00:20:42.411 "auth": { 00:20:42.411 "state": "completed", 00:20:42.411 "digest": "sha256", 00:20:42.411 "dhgroup": "null" 00:20:42.411 } 00:20:42.411 } 00:20:42.411 ]' 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.411 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.670 12:10:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.238 12:10:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:43.238 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.497 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.497 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.755 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.755 { 00:20:43.755 "cntlid": 3, 00:20:43.755 "qid": 0, 00:20:43.755 
"state": "enabled", 00:20:43.755 "thread": "nvmf_tgt_poll_group_000", 00:20:43.755 "listen_address": { 00:20:43.755 "trtype": "TCP", 00:20:43.756 "adrfam": "IPv4", 00:20:43.756 "traddr": "10.0.0.2", 00:20:43.756 "trsvcid": "4420" 00:20:43.756 }, 00:20:43.756 "peer_address": { 00:20:43.756 "trtype": "TCP", 00:20:43.756 "adrfam": "IPv4", 00:20:43.756 "traddr": "10.0.0.1", 00:20:43.756 "trsvcid": "41000" 00:20:43.756 }, 00:20:43.756 "auth": { 00:20:43.756 "state": "completed", 00:20:43.756 "digest": "sha256", 00:20:43.756 "dhgroup": "null" 00:20:43.756 } 00:20:43.756 } 00:20:43.756 ]' 00:20:43.756 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.756 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.756 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.014 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:44.014 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.014 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.014 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.014 12:10:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.014 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.581 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:44.840 12:10:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.840 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.098 00:20:45.098 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.098 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.098 12:10:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.357 { 00:20:45.357 "cntlid": 5, 00:20:45.357 "qid": 0, 00:20:45.357 "state": "enabled", 00:20:45.357 "thread": "nvmf_tgt_poll_group_000", 00:20:45.357 "listen_address": { 00:20:45.357 "trtype": "TCP", 00:20:45.357 "adrfam": "IPv4", 00:20:45.357 "traddr": "10.0.0.2", 00:20:45.357 "trsvcid": "4420" 00:20:45.357 }, 00:20:45.357 "peer_address": { 00:20:45.357 "trtype": "TCP", 00:20:45.357 "adrfam": "IPv4", 00:20:45.357 "traddr": "10.0.0.1", 00:20:45.357 "trsvcid": "41028" 00:20:45.357 }, 00:20:45.357 "auth": { 00:20:45.357 "state": "completed", 00:20:45.357 "digest": "sha256", 00:20:45.357 "dhgroup": "null" 00:20:45.357 } 00:20:45.357 } 00:20:45.357 ]' 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.357 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.616 12:10:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:46.183 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.442 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:46.701 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.701 { 00:20:46.701 "cntlid": 7, 00:20:46.701 "qid": 0, 00:20:46.701 "state": "enabled", 00:20:46.701 "thread": "nvmf_tgt_poll_group_000", 00:20:46.701 "listen_address": { 00:20:46.701 "trtype": "TCP", 00:20:46.701 "adrfam": "IPv4", 00:20:46.701 "traddr": "10.0.0.2", 00:20:46.701 "trsvcid": "4420" 00:20:46.701 }, 00:20:46.701 "peer_address": { 00:20:46.701 "trtype": "TCP", 00:20:46.701 "adrfam": "IPv4", 00:20:46.701 "traddr": "10.0.0.1", 00:20:46.701 "trsvcid": "41042" 00:20:46.701 }, 00:20:46.701 "auth": { 00:20:46.701 "state": "completed", 00:20:46.701 "digest": "sha256", 00:20:46.701 "dhgroup": "null" 00:20:46.701 } 00:20:46.701 } 00:20:46.701 ]' 00:20:46.701 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.958 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.217 12:10:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.782 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.040 12:10:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.040 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.040 12:10:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.040 00:20:48.040 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.040 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.040 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.296 { 00:20:48.296 "cntlid": 9, 00:20:48.296 "qid": 0, 00:20:48.296 "state": "enabled", 00:20:48.296 "thread": "nvmf_tgt_poll_group_000", 00:20:48.296 "listen_address": { 00:20:48.296 "trtype": "TCP", 00:20:48.296 "adrfam": "IPv4", 00:20:48.296 "traddr": "10.0.0.2", 00:20:48.296 "trsvcid": "4420" 00:20:48.296 }, 00:20:48.296 "peer_address": { 00:20:48.296 "trtype": "TCP", 00:20:48.296 "adrfam": "IPv4", 00:20:48.296 "traddr": "10.0.0.1", 00:20:48.296 "trsvcid": "41990" 00:20:48.296 }, 00:20:48.296 "auth": { 00:20:48.296 "state": "completed", 00:20:48.296 "digest": "sha256", 00:20:48.296 "dhgroup": "ffdhe2048" 00:20:48.296 } 00:20:48.296 } 00:20:48.296 ]' 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:48.296 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.554 12:10:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:49.120 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.379 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.638 00:20:49.638 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.638 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.638 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.896 { 00:20:49.896 "cntlid": 11, 00:20:49.896 "qid": 0, 00:20:49.896 "state": "enabled", 00:20:49.896 "thread": "nvmf_tgt_poll_group_000", 00:20:49.896 "listen_address": { 00:20:49.896 "trtype": "TCP", 00:20:49.896 "adrfam": "IPv4", 00:20:49.896 "traddr": "10.0.0.2", 00:20:49.896 "trsvcid": "4420" 00:20:49.896 }, 00:20:49.896 "peer_address": { 00:20:49.896 "trtype": "TCP", 00:20:49.896 "adrfam": "IPv4", 00:20:49.896 "traddr": "10.0.0.1", 00:20:49.896 "trsvcid": "42020" 00:20:49.896 }, 00:20:49.896 "auth": { 00:20:49.896 "state": "completed", 00:20:49.896 "digest": "sha256", 00:20:49.896 "dhgroup": "ffdhe2048" 00:20:49.896 } 00:20:49.896 } 00:20:49.896 ]' 00:20:49.896 
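The same pattern repeats for every digest/dhgroup/key combination; condensed, one iteration of the connect-and-verify loop traced above looks like the sketch below. This is not test output: rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, target-side calls go to the default RPC socket while host-side calls use -s /var/tmp/host.sock, and $hostnqn abbreviates nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 as used throughout this run.

# host side: restrict negotiation to the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# target side: allow this host on the subsystem with the keypair under test
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach a controller, authenticating with the same keypair
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# target side: confirm the qpair finished authentication with the expected parameters
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect: completed
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect: sha256
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect: ffdhe2048
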
12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.896 12:10:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.155 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:50.724 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.983 12:10:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.242 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.242 { 00:20:51.242 "cntlid": 13, 00:20:51.242 "qid": 0, 00:20:51.242 "state": "enabled", 00:20:51.242 "thread": "nvmf_tgt_poll_group_000", 00:20:51.242 "listen_address": { 00:20:51.242 "trtype": "TCP", 00:20:51.242 "adrfam": "IPv4", 00:20:51.242 "traddr": "10.0.0.2", 00:20:51.242 "trsvcid": "4420" 00:20:51.242 }, 00:20:51.242 "peer_address": { 00:20:51.242 "trtype": "TCP", 00:20:51.242 "adrfam": "IPv4", 00:20:51.242 "traddr": "10.0.0.1", 00:20:51.242 "trsvcid": "42042" 00:20:51.242 }, 00:20:51.242 "auth": { 00:20:51.242 "state": "completed", 00:20:51.242 "digest": "sha256", 00:20:51.242 "dhgroup": "ffdhe2048" 00:20:51.242 } 00:20:51.242 } 00:20:51.242 ]' 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.242 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.501 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:51.501 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.501 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.501 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.501 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.761 12:10:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.329 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.587 00:20:52.587 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.587 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.587 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.846 { 00:20:52.846 "cntlid": 15, 00:20:52.846 "qid": 0, 00:20:52.846 "state": "enabled", 00:20:52.846 "thread": "nvmf_tgt_poll_group_000", 00:20:52.846 "listen_address": { 00:20:52.846 "trtype": "TCP", 00:20:52.846 "adrfam": "IPv4", 00:20:52.846 "traddr": "10.0.0.2", 00:20:52.846 "trsvcid": "4420" 00:20:52.846 }, 00:20:52.846 "peer_address": { 00:20:52.846 "trtype": "TCP", 00:20:52.846 "adrfam": "IPv4", 00:20:52.846 "traddr": "10.0.0.1", 00:20:52.846 "trsvcid": "42060" 00:20:52.846 }, 00:20:52.846 "auth": { 00:20:52.846 "state": "completed", 00:20:52.846 "digest": "sha256", 00:20:52.846 "dhgroup": "ffdhe2048" 00:20:52.846 } 00:20:52.846 } 00:20:52.846 ]' 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.846 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.105 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.105 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.105 12:10:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.105 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.674 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.933 12:10:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.193 00:20:54.193 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.193 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.193 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.452 { 00:20:54.452 "cntlid": 17, 00:20:54.452 "qid": 0, 00:20:54.452 "state": "enabled", 00:20:54.452 "thread": "nvmf_tgt_poll_group_000", 00:20:54.452 "listen_address": { 00:20:54.452 "trtype": "TCP", 00:20:54.452 "adrfam": "IPv4", 00:20:54.452 "traddr": 
"10.0.0.2", 00:20:54.452 "trsvcid": "4420" 00:20:54.452 }, 00:20:54.452 "peer_address": { 00:20:54.452 "trtype": "TCP", 00:20:54.452 "adrfam": "IPv4", 00:20:54.452 "traddr": "10.0.0.1", 00:20:54.452 "trsvcid": "42086" 00:20:54.452 }, 00:20:54.452 "auth": { 00:20:54.452 "state": "completed", 00:20:54.452 "digest": "sha256", 00:20:54.452 "dhgroup": "ffdhe3072" 00:20:54.452 } 00:20:54.452 } 00:20:54.452 ]' 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.452 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.453 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.716 12:10:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.285 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.544 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.544 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.802 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.803 { 00:20:55.803 "cntlid": 19, 00:20:55.803 "qid": 0, 00:20:55.803 "state": "enabled", 00:20:55.803 "thread": "nvmf_tgt_poll_group_000", 00:20:55.803 "listen_address": { 00:20:55.803 "trtype": "TCP", 00:20:55.803 "adrfam": "IPv4", 00:20:55.803 "traddr": "10.0.0.2", 00:20:55.803 "trsvcid": "4420" 00:20:55.803 }, 00:20:55.803 "peer_address": { 00:20:55.803 "trtype": "TCP", 00:20:55.803 "adrfam": "IPv4", 00:20:55.803 "traddr": "10.0.0.1", 00:20:55.803 "trsvcid": "42114" 00:20:55.803 }, 00:20:55.803 "auth": { 00:20:55.803 "state": "completed", 00:20:55.803 "digest": "sha256", 00:20:55.803 "dhgroup": "ffdhe3072" 00:20:55.803 } 00:20:55.803 } 00:20:55.803 ]' 00:20:55.803 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.803 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.803 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.062 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:56.062 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.062 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.062 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.062 12:10:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.062 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:56.635 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.898 12:10:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.156 00:20:57.156 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.156 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.156 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.415 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.415 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.416 { 00:20:57.416 "cntlid": 21, 00:20:57.416 "qid": 0, 00:20:57.416 "state": "enabled", 00:20:57.416 "thread": "nvmf_tgt_poll_group_000", 00:20:57.416 "listen_address": { 00:20:57.416 "trtype": "TCP", 00:20:57.416 "adrfam": "IPv4", 00:20:57.416 "traddr": "10.0.0.2", 00:20:57.416 "trsvcid": "4420" 00:20:57.416 }, 00:20:57.416 "peer_address": { 00:20:57.416 "trtype": "TCP", 00:20:57.416 "adrfam": "IPv4", 00:20:57.416 "traddr": "10.0.0.1", 00:20:57.416 "trsvcid": "48624" 00:20:57.416 }, 00:20:57.416 "auth": { 00:20:57.416 "state": "completed", 00:20:57.416 "digest": "sha256", 00:20:57.416 "dhgroup": "ffdhe3072" 00:20:57.416 } 00:20:57.416 } 00:20:57.416 ]' 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.416 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.675 12:10:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
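For readers following the trace: each pass of the loop above exercises one digest/dhgroup/keyid combination end to end. A minimal sketch of that cycle, distilled only from the commands captured in this log (the rpc.py path, /var/tmp/host.sock, the NQNs and the host UUID are the ones used by this run; the DHHC-1 secrets are abbreviated as placeholders, and the target-side rpc_cmd socket is configured earlier in the run and not shown in this excerpt), looks roughly like this:

# One iteration of the auth loop, as traced above (sketch only).
# "hostrpc" is the host-side SPDK app's RPC helper, exactly as expanded at auth.sh@31;
# "rpc_cmd" addresses the nvmf target app (its socket is not visible in this excerpt).
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# 1. Limit the host initiator to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2. Authorize the host NQN on the subsystem with this iteration's key pair.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host app; this is where in-band (DH-HMAC-CHAP) auth runs.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Check on the target that the qpair completed auth with the expected
#    digest and dhgroup, then detach the controller again.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
hostrpc bdev_nvme_detach_controller nvme0

# 5. Repeat the connect through the kernel initiator with literal DHHC-1
#    secrets (abbreviated here), then disconnect and drop the host entry.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

The trace then advances to the next keyid (and, when the inner loop is exhausted, the next dhgroup), as the log entries below show.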
00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:58.244 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.503 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.762 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.762 { 00:20:58.762 "cntlid": 23, 00:20:58.762 "qid": 0, 00:20:58.762 "state": "enabled", 00:20:58.762 "thread": "nvmf_tgt_poll_group_000", 00:20:58.762 "listen_address": { 00:20:58.762 "trtype": "TCP", 00:20:58.762 "adrfam": "IPv4", 00:20:58.762 "traddr": "10.0.0.2", 00:20:58.762 "trsvcid": "4420" 00:20:58.762 }, 00:20:58.762 "peer_address": { 00:20:58.762 "trtype": "TCP", 00:20:58.762 "adrfam": "IPv4", 00:20:58.762 "traddr": "10.0.0.1", 00:20:58.762 "trsvcid": "48646" 00:20:58.762 }, 00:20:58.762 "auth": { 00:20:58.762 "state": "completed", 00:20:58.762 "digest": "sha256", 00:20:58.762 "dhgroup": "ffdhe3072" 00:20:58.762 } 00:20:58.762 } 00:20:58.762 ]' 00:20:58.762 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.021 12:10:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.280 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.848 12:10:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.108 00:21:00.108 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.108 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.108 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.367 { 00:21:00.367 "cntlid": 25, 00:21:00.367 "qid": 0, 00:21:00.367 "state": "enabled", 00:21:00.367 "thread": "nvmf_tgt_poll_group_000", 00:21:00.367 "listen_address": { 00:21:00.367 "trtype": "TCP", 00:21:00.367 "adrfam": "IPv4", 00:21:00.367 "traddr": "10.0.0.2", 00:21:00.367 "trsvcid": "4420" 00:21:00.367 }, 00:21:00.367 "peer_address": { 00:21:00.367 "trtype": "TCP", 00:21:00.367 "adrfam": "IPv4", 00:21:00.367 "traddr": "10.0.0.1", 00:21:00.367 "trsvcid": "48682" 00:21:00.367 }, 00:21:00.367 "auth": { 00:21:00.367 "state": "completed", 00:21:00.367 "digest": "sha256", 00:21:00.367 "dhgroup": "ffdhe4096" 00:21:00.367 } 00:21:00.367 } 00:21:00.367 ]' 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.367 12:10:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.367 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.626 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.626 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.626 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.626 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.626 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.627 12:10:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.194 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.453 12:10:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.453 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.732 00:21:01.732 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.732 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.732 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.989 { 00:21:01.989 "cntlid": 27, 00:21:01.989 "qid": 0, 00:21:01.989 "state": "enabled", 00:21:01.989 "thread": "nvmf_tgt_poll_group_000", 00:21:01.989 "listen_address": { 00:21:01.989 "trtype": "TCP", 00:21:01.989 "adrfam": "IPv4", 00:21:01.989 "traddr": "10.0.0.2", 00:21:01.989 "trsvcid": "4420" 00:21:01.989 }, 00:21:01.989 "peer_address": { 00:21:01.989 "trtype": "TCP", 00:21:01.989 "adrfam": "IPv4", 00:21:01.989 "traddr": "10.0.0.1", 00:21:01.989 "trsvcid": "48704" 00:21:01.989 }, 00:21:01.989 "auth": { 00:21:01.989 "state": "completed", 00:21:01.989 "digest": "sha256", 00:21:01.989 "dhgroup": "ffdhe4096" 00:21:01.989 } 00:21:01.989 } 00:21:01.989 ]' 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.989 12:10:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.247 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.813 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:02.814 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.072 12:10:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.349 00:21:03.349 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.349 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.349 12:10:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.620 { 00:21:03.620 "cntlid": 29, 00:21:03.620 "qid": 0, 00:21:03.620 "state": "enabled", 00:21:03.620 "thread": "nvmf_tgt_poll_group_000", 00:21:03.620 "listen_address": { 00:21:03.620 "trtype": "TCP", 00:21:03.620 "adrfam": "IPv4", 00:21:03.620 "traddr": "10.0.0.2", 00:21:03.620 "trsvcid": "4420" 00:21:03.620 }, 00:21:03.620 "peer_address": { 00:21:03.620 "trtype": "TCP", 00:21:03.620 "adrfam": "IPv4", 00:21:03.620 "traddr": "10.0.0.1", 00:21:03.620 "trsvcid": "48730" 00:21:03.620 }, 00:21:03.620 "auth": { 00:21:03.620 "state": "completed", 00:21:03.620 "digest": "sha256", 00:21:03.620 "dhgroup": "ffdhe4096" 00:21:03.620 } 00:21:03.620 } 00:21:03.620 ]' 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.620 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.879 12:10:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.446 12:10:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.446 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.704 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.963 { 00:21:04.963 "cntlid": 31, 00:21:04.963 "qid": 0, 00:21:04.963 "state": "enabled", 00:21:04.963 "thread": "nvmf_tgt_poll_group_000", 00:21:04.963 "listen_address": { 00:21:04.963 "trtype": "TCP", 00:21:04.963 "adrfam": "IPv4", 00:21:04.963 "traddr": "10.0.0.2", 00:21:04.963 "trsvcid": "4420" 00:21:04.963 }, 
00:21:04.963 "peer_address": { 00:21:04.963 "trtype": "TCP", 00:21:04.963 "adrfam": "IPv4", 00:21:04.963 "traddr": "10.0.0.1", 00:21:04.963 "trsvcid": "48758" 00:21:04.963 }, 00:21:04.963 "auth": { 00:21:04.963 "state": "completed", 00:21:04.963 "digest": "sha256", 00:21:04.963 "dhgroup": "ffdhe4096" 00:21:04.963 } 00:21:04.963 } 00:21:04.963 ]' 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.963 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.222 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:05.222 12:10:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.222 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.222 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.222 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.222 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:05.789 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.789 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:05.789 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.789 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.047 12:10:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.614 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.614 { 00:21:06.614 "cntlid": 33, 00:21:06.614 "qid": 0, 00:21:06.614 "state": "enabled", 00:21:06.614 "thread": "nvmf_tgt_poll_group_000", 00:21:06.614 "listen_address": { 00:21:06.614 "trtype": "TCP", 00:21:06.614 "adrfam": "IPv4", 00:21:06.614 "traddr": "10.0.0.2", 00:21:06.614 "trsvcid": "4420" 00:21:06.614 }, 00:21:06.614 "peer_address": { 00:21:06.614 "trtype": "TCP", 00:21:06.614 "adrfam": "IPv4", 00:21:06.614 "traddr": "10.0.0.1", 00:21:06.614 "trsvcid": "48788" 00:21:06.614 }, 00:21:06.614 "auth": { 00:21:06.614 "state": "completed", 00:21:06.614 "digest": "sha256", 00:21:06.614 "dhgroup": "ffdhe6144" 00:21:06.614 } 00:21:06.614 } 00:21:06.614 ]' 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.614 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.872 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.872 12:10:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.872 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.872 12:10:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:07.439 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.698 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.956 00:21:07.956 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.956 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.956 12:10:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.215 { 00:21:08.215 "cntlid": 35, 00:21:08.215 "qid": 0, 00:21:08.215 "state": "enabled", 00:21:08.215 "thread": "nvmf_tgt_poll_group_000", 00:21:08.215 "listen_address": { 00:21:08.215 "trtype": "TCP", 00:21:08.215 "adrfam": "IPv4", 00:21:08.215 "traddr": "10.0.0.2", 00:21:08.215 "trsvcid": "4420" 00:21:08.215 }, 00:21:08.215 "peer_address": { 00:21:08.215 "trtype": "TCP", 00:21:08.215 "adrfam": "IPv4", 00:21:08.215 "traddr": "10.0.0.1", 00:21:08.215 "trsvcid": "33754" 00:21:08.215 }, 00:21:08.215 "auth": { 00:21:08.215 "state": "completed", 00:21:08.215 "digest": "sha256", 00:21:08.215 "dhgroup": "ffdhe6144" 00:21:08.215 } 00:21:08.215 } 00:21:08.215 ]' 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.215 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.473 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.473 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.473 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.473 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:09.040 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.040 12:10:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:09.040 12:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.040 12:10:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.040 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.040 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.040 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:09.040 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.299 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.557 00:21:09.557 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.557 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.557 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.815 { 00:21:09.815 "cntlid": 37, 00:21:09.815 "qid": 0, 00:21:09.815 "state": "enabled", 00:21:09.815 "thread": "nvmf_tgt_poll_group_000", 00:21:09.815 "listen_address": { 00:21:09.815 "trtype": "TCP", 00:21:09.815 "adrfam": "IPv4", 00:21:09.815 "traddr": "10.0.0.2", 00:21:09.815 "trsvcid": "4420" 00:21:09.815 }, 00:21:09.815 "peer_address": { 00:21:09.815 "trtype": "TCP", 00:21:09.815 "adrfam": "IPv4", 00:21:09.815 "traddr": "10.0.0.1", 00:21:09.815 "trsvcid": "33790" 00:21:09.815 }, 00:21:09.815 "auth": { 00:21:09.815 "state": "completed", 00:21:09.815 "digest": "sha256", 00:21:09.815 "dhgroup": "ffdhe6144" 00:21:09.815 } 00:21:09.815 } 00:21:09.815 ]' 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.815 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.073 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.073 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.073 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.073 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.073 12:10:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.073 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.638 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.895 12:11:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.153 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.410 { 00:21:11.410 "cntlid": 39, 00:21:11.410 "qid": 0, 00:21:11.410 "state": "enabled", 00:21:11.410 "thread": "nvmf_tgt_poll_group_000", 00:21:11.410 "listen_address": { 00:21:11.410 "trtype": "TCP", 00:21:11.410 "adrfam": "IPv4", 00:21:11.410 "traddr": "10.0.0.2", 00:21:11.410 "trsvcid": "4420" 00:21:11.410 }, 00:21:11.410 "peer_address": { 00:21:11.410 "trtype": "TCP", 00:21:11.410 "adrfam": "IPv4", 00:21:11.410 "traddr": "10.0.0.1", 00:21:11.410 "trsvcid": "33830" 00:21:11.410 }, 00:21:11.410 "auth": { 00:21:11.410 "state": "completed", 00:21:11.410 "digest": "sha256", 00:21:11.410 "dhgroup": "ffdhe6144" 00:21:11.410 } 00:21:11.410 } 00:21:11.410 ]' 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.410 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.668 12:11:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.668 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.668 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.668 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.668 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.668 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.926 12:11:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.491 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.492 12:11:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.492 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.058 00:21:13.058 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.058 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.058 12:11:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.317 { 00:21:13.317 "cntlid": 41, 00:21:13.317 "qid": 0, 00:21:13.317 "state": "enabled", 00:21:13.317 "thread": "nvmf_tgt_poll_group_000", 00:21:13.317 "listen_address": { 00:21:13.317 "trtype": "TCP", 00:21:13.317 "adrfam": "IPv4", 00:21:13.317 "traddr": "10.0.0.2", 00:21:13.317 "trsvcid": "4420" 00:21:13.317 }, 00:21:13.317 "peer_address": { 00:21:13.317 "trtype": "TCP", 00:21:13.317 "adrfam": "IPv4", 00:21:13.317 "traddr": "10.0.0.1", 00:21:13.317 "trsvcid": "33846" 00:21:13.317 }, 00:21:13.317 "auth": { 00:21:13.317 "state": "completed", 00:21:13.317 "digest": "sha256", 00:21:13.317 "dhgroup": "ffdhe8192" 00:21:13.317 } 00:21:13.317 } 00:21:13.317 ]' 00:21:13.317 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.318 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.575 12:11:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:14.139 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.397 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.398 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.398 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.398 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.398 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.965 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.965 { 00:21:14.965 "cntlid": 43, 00:21:14.965 "qid": 0, 00:21:14.965 "state": "enabled", 00:21:14.965 "thread": "nvmf_tgt_poll_group_000", 00:21:14.965 "listen_address": { 00:21:14.965 "trtype": "TCP", 00:21:14.965 "adrfam": "IPv4", 00:21:14.965 "traddr": "10.0.0.2", 00:21:14.965 "trsvcid": "4420" 00:21:14.965 }, 00:21:14.965 "peer_address": { 00:21:14.965 "trtype": "TCP", 00:21:14.965 "adrfam": "IPv4", 00:21:14.965 "traddr": "10.0.0.1", 00:21:14.965 "trsvcid": "33864" 00:21:14.965 }, 00:21:14.965 "auth": { 00:21:14.965 "state": "completed", 00:21:14.965 "digest": "sha256", 00:21:14.965 "dhgroup": "ffdhe8192" 00:21:14.965 } 00:21:14.965 } 00:21:14.965 ]' 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.965 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.966 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.224 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.224 12:11:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.224 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.224 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.224 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.224 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:15.791 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.791 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:15.791 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.791 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.049 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.050 12:11:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.657 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.657 { 00:21:16.657 "cntlid": 45, 00:21:16.657 "qid": 0, 00:21:16.657 "state": "enabled", 00:21:16.657 "thread": "nvmf_tgt_poll_group_000", 00:21:16.657 "listen_address": { 00:21:16.657 "trtype": "TCP", 00:21:16.657 "adrfam": "IPv4", 00:21:16.657 "traddr": "10.0.0.2", 00:21:16.657 "trsvcid": "4420" 
00:21:16.657 }, 00:21:16.657 "peer_address": { 00:21:16.657 "trtype": "TCP", 00:21:16.657 "adrfam": "IPv4", 00:21:16.657 "traddr": "10.0.0.1", 00:21:16.657 "trsvcid": "33876" 00:21:16.657 }, 00:21:16.657 "auth": { 00:21:16.657 "state": "completed", 00:21:16.657 "digest": "sha256", 00:21:16.657 "dhgroup": "ffdhe8192" 00:21:16.657 } 00:21:16.657 } 00:21:16.657 ]' 00:21:16.657 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.916 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.917 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.176 12:11:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:17.745 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.746 12:11:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.746 12:11:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.314 00:21:18.314 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.314 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.314 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.573 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.573 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.574 { 00:21:18.574 "cntlid": 47, 00:21:18.574 "qid": 0, 00:21:18.574 "state": "enabled", 00:21:18.574 "thread": "nvmf_tgt_poll_group_000", 00:21:18.574 "listen_address": { 00:21:18.574 "trtype": "TCP", 00:21:18.574 "adrfam": "IPv4", 00:21:18.574 "traddr": "10.0.0.2", 00:21:18.574 "trsvcid": "4420" 00:21:18.574 }, 00:21:18.574 "peer_address": { 00:21:18.574 "trtype": "TCP", 00:21:18.574 "adrfam": "IPv4", 00:21:18.574 "traddr": "10.0.0.1", 00:21:18.574 "trsvcid": "43766" 00:21:18.574 }, 00:21:18.574 "auth": { 00:21:18.574 "state": "completed", 00:21:18.574 "digest": "sha256", 00:21:18.574 "dhgroup": "ffdhe8192" 00:21:18.574 } 00:21:18.574 } 00:21:18.574 ]' 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.574 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.574 
12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.833 12:11:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.402 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.661 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.920 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.920 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.920 { 00:21:19.920 "cntlid": 49, 00:21:19.920 "qid": 0, 00:21:19.920 "state": "enabled", 00:21:19.920 "thread": "nvmf_tgt_poll_group_000", 00:21:19.920 "listen_address": { 00:21:19.920 "trtype": "TCP", 00:21:19.920 "adrfam": "IPv4", 00:21:19.921 "traddr": "10.0.0.2", 00:21:19.921 "trsvcid": "4420" 00:21:19.921 }, 00:21:19.921 "peer_address": { 00:21:19.921 "trtype": "TCP", 00:21:19.921 "adrfam": "IPv4", 00:21:19.921 "traddr": "10.0.0.1", 00:21:19.921 "trsvcid": "43788" 00:21:19.921 }, 00:21:19.921 "auth": { 00:21:19.921 "state": "completed", 00:21:19.921 "digest": "sha384", 00:21:19.921 "dhgroup": "null" 00:21:19.921 } 00:21:19.921 } 00:21:19.921 ]' 00:21:19.921 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.921 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.921 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.180 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.180 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.180 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.180 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.180 12:11:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.439 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.008 12:11:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.266 00:21:21.266 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.266 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.266 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.525 { 00:21:21.525 "cntlid": 51, 00:21:21.525 "qid": 0, 00:21:21.525 "state": "enabled", 00:21:21.525 "thread": "nvmf_tgt_poll_group_000", 00:21:21.525 "listen_address": { 00:21:21.525 "trtype": "TCP", 00:21:21.525 "adrfam": "IPv4", 00:21:21.525 "traddr": "10.0.0.2", 00:21:21.525 "trsvcid": "4420" 00:21:21.525 }, 00:21:21.525 "peer_address": { 00:21:21.525 "trtype": "TCP", 00:21:21.525 "adrfam": "IPv4", 00:21:21.525 "traddr": "10.0.0.1", 00:21:21.525 "trsvcid": "43810" 00:21:21.525 }, 00:21:21.525 "auth": { 00:21:21.525 "state": "completed", 00:21:21.525 "digest": "sha384", 00:21:21.525 "dhgroup": "null" 00:21:21.525 } 00:21:21.525 } 00:21:21.525 ]' 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.525 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.784 12:11:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:22.352 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:22.611 12:11:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.611 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.612 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.871 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.871 { 00:21:22.871 "cntlid": 53, 00:21:22.871 "qid": 0, 00:21:22.871 "state": "enabled", 00:21:22.871 "thread": "nvmf_tgt_poll_group_000", 00:21:22.871 "listen_address": { 00:21:22.871 "trtype": "TCP", 00:21:22.871 "adrfam": "IPv4", 00:21:22.871 "traddr": "10.0.0.2", 00:21:22.871 "trsvcid": "4420" 00:21:22.871 }, 00:21:22.871 "peer_address": { 00:21:22.871 "trtype": "TCP", 00:21:22.871 "adrfam": "IPv4", 00:21:22.871 "traddr": "10.0.0.1", 00:21:22.871 "trsvcid": "43838" 00:21:22.871 }, 00:21:22.871 "auth": { 00:21:22.871 "state": "completed", 00:21:22.871 "digest": "sha384", 00:21:22.871 "dhgroup": "null" 00:21:22.871 } 00:21:22.871 } 00:21:22.871 ]' 00:21:22.871 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.130 12:11:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.395 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.964 12:11:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.223 00:21:24.223 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.223 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.223 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.482 { 00:21:24.482 "cntlid": 55, 00:21:24.482 "qid": 0, 00:21:24.482 "state": "enabled", 00:21:24.482 "thread": "nvmf_tgt_poll_group_000", 00:21:24.482 "listen_address": { 00:21:24.482 "trtype": "TCP", 00:21:24.482 "adrfam": "IPv4", 00:21:24.482 "traddr": "10.0.0.2", 00:21:24.482 "trsvcid": "4420" 00:21:24.482 }, 00:21:24.482 "peer_address": { 00:21:24.482 "trtype": "TCP", 00:21:24.482 "adrfam": "IPv4", 00:21:24.482 "traddr": "10.0.0.1", 00:21:24.482 "trsvcid": "43850" 00:21:24.482 }, 00:21:24.482 "auth": { 00:21:24.482 "state": "completed", 00:21:24.482 "digest": "sha384", 00:21:24.482 "dhgroup": "null" 00:21:24.482 } 00:21:24.482 } 00:21:24.482 ]' 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:24.482 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.741 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.741 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.741 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.741 12:11:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:25.307 12:11:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.307 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.308 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.566 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.826 00:21:25.826 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.826 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.826 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.085 { 00:21:26.085 "cntlid": 57, 00:21:26.085 "qid": 0, 00:21:26.085 "state": "enabled", 00:21:26.085 "thread": "nvmf_tgt_poll_group_000", 00:21:26.085 "listen_address": { 00:21:26.085 "trtype": "TCP", 00:21:26.085 "adrfam": "IPv4", 00:21:26.085 "traddr": "10.0.0.2", 00:21:26.085 "trsvcid": "4420" 00:21:26.085 }, 00:21:26.085 "peer_address": { 00:21:26.085 "trtype": "TCP", 00:21:26.085 "adrfam": "IPv4", 00:21:26.085 "traddr": "10.0.0.1", 00:21:26.085 "trsvcid": "43888" 00:21:26.085 }, 00:21:26.085 "auth": { 00:21:26.085 "state": "completed", 00:21:26.085 "digest": "sha384", 00:21:26.085 "dhgroup": "ffdhe2048" 00:21:26.085 } 00:21:26.085 } 00:21:26.085 ]' 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.085 12:11:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.344 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.913 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.173 12:11:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.173 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.173 12:11:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.173 00:21:27.173 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.173 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.173 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.432 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.432 { 00:21:27.432 "cntlid": 59, 00:21:27.432 "qid": 0, 00:21:27.432 "state": "enabled", 00:21:27.432 "thread": "nvmf_tgt_poll_group_000", 00:21:27.432 "listen_address": { 00:21:27.432 "trtype": "TCP", 00:21:27.432 "adrfam": "IPv4", 00:21:27.432 "traddr": "10.0.0.2", 00:21:27.432 "trsvcid": "4420" 00:21:27.432 }, 00:21:27.432 "peer_address": { 00:21:27.432 "trtype": "TCP", 00:21:27.432 "adrfam": "IPv4", 00:21:27.432 
"traddr": "10.0.0.1", 00:21:27.432 "trsvcid": "37524" 00:21:27.432 }, 00:21:27.433 "auth": { 00:21:27.433 "state": "completed", 00:21:27.433 "digest": "sha384", 00:21:27.433 "dhgroup": "ffdhe2048" 00:21:27.433 } 00:21:27.433 } 00:21:27.433 ]' 00:21:27.433 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.433 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.433 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.433 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.433 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.692 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.692 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.692 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.692 12:11:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.261 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.520 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.779 00:21:28.779 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.779 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.779 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.038 { 00:21:29.038 "cntlid": 61, 00:21:29.038 "qid": 0, 00:21:29.038 "state": "enabled", 00:21:29.038 "thread": "nvmf_tgt_poll_group_000", 00:21:29.038 "listen_address": { 00:21:29.038 "trtype": "TCP", 00:21:29.038 "adrfam": "IPv4", 00:21:29.038 "traddr": "10.0.0.2", 00:21:29.038 "trsvcid": "4420" 00:21:29.038 }, 00:21:29.038 "peer_address": { 00:21:29.038 "trtype": "TCP", 00:21:29.038 "adrfam": "IPv4", 00:21:29.038 "traddr": "10.0.0.1", 00:21:29.038 "trsvcid": "37560" 00:21:29.038 }, 00:21:29.038 "auth": { 00:21:29.038 "state": "completed", 00:21:29.038 "digest": "sha384", 00:21:29.038 "dhgroup": "ffdhe2048" 00:21:29.038 } 00:21:29.038 } 00:21:29.038 ]' 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.038 12:11:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.298 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:29.866 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.125 12:11:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.384 00:21:30.384 12:11:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.384 { 00:21:30.384 "cntlid": 63, 00:21:30.384 "qid": 0, 00:21:30.384 "state": "enabled", 00:21:30.384 "thread": "nvmf_tgt_poll_group_000", 00:21:30.384 "listen_address": { 00:21:30.384 "trtype": "TCP", 00:21:30.384 "adrfam": "IPv4", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "trsvcid": "4420" 00:21:30.384 }, 00:21:30.384 "peer_address": { 00:21:30.384 "trtype": "TCP", 00:21:30.384 "adrfam": "IPv4", 00:21:30.384 "traddr": "10.0.0.1", 00:21:30.384 "trsvcid": "37582" 00:21:30.384 }, 00:21:30.384 "auth": { 00:21:30.384 "state": "completed", 00:21:30.384 "digest": "sha384", 00:21:30.384 "dhgroup": "ffdhe2048" 00:21:30.384 } 00:21:30.384 } 00:21:30.384 ]' 00:21:30.384 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.643 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.900 12:11:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.464 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.721 00:21:31.721 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.721 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.721 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.978 { 
00:21:31.978 "cntlid": 65, 00:21:31.978 "qid": 0, 00:21:31.978 "state": "enabled", 00:21:31.978 "thread": "nvmf_tgt_poll_group_000", 00:21:31.978 "listen_address": { 00:21:31.978 "trtype": "TCP", 00:21:31.978 "adrfam": "IPv4", 00:21:31.978 "traddr": "10.0.0.2", 00:21:31.978 "trsvcid": "4420" 00:21:31.978 }, 00:21:31.978 "peer_address": { 00:21:31.978 "trtype": "TCP", 00:21:31.978 "adrfam": "IPv4", 00:21:31.978 "traddr": "10.0.0.1", 00:21:31.978 "trsvcid": "37606" 00:21:31.978 }, 00:21:31.978 "auth": { 00:21:31.978 "state": "completed", 00:21:31.978 "digest": "sha384", 00:21:31.978 "dhgroup": "ffdhe3072" 00:21:31.978 } 00:21:31.978 } 00:21:31.978 ]' 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.978 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.979 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.979 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.235 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.235 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.235 12:11:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.235 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:32.801 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.059 12:11:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.317 00:21:33.317 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.317 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.317 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.575 { 00:21:33.575 "cntlid": 67, 00:21:33.575 "qid": 0, 00:21:33.575 "state": "enabled", 00:21:33.575 "thread": "nvmf_tgt_poll_group_000", 00:21:33.575 "listen_address": { 00:21:33.575 "trtype": "TCP", 00:21:33.575 "adrfam": "IPv4", 00:21:33.575 "traddr": "10.0.0.2", 00:21:33.575 "trsvcid": "4420" 00:21:33.575 }, 00:21:33.575 "peer_address": { 00:21:33.575 "trtype": "TCP", 00:21:33.575 "adrfam": "IPv4", 00:21:33.575 "traddr": "10.0.0.1", 00:21:33.575 "trsvcid": "37640" 00:21:33.575 }, 00:21:33.575 "auth": { 00:21:33.575 "state": "completed", 00:21:33.575 "digest": "sha384", 00:21:33.575 "dhgroup": "ffdhe3072" 00:21:33.575 } 00:21:33.575 } 00:21:33.575 ]' 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.575 12:11:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.575 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.576 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.576 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.833 12:11:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.399 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.671 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.672 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.957 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.957 { 00:21:34.957 "cntlid": 69, 00:21:34.957 "qid": 0, 00:21:34.957 "state": "enabled", 00:21:34.957 "thread": "nvmf_tgt_poll_group_000", 00:21:34.957 "listen_address": { 00:21:34.957 "trtype": "TCP", 00:21:34.957 "adrfam": "IPv4", 00:21:34.957 "traddr": "10.0.0.2", 00:21:34.957 "trsvcid": "4420" 00:21:34.957 }, 00:21:34.957 "peer_address": { 00:21:34.957 "trtype": "TCP", 00:21:34.957 "adrfam": "IPv4", 00:21:34.957 "traddr": "10.0.0.1", 00:21:34.957 "trsvcid": "37668" 00:21:34.957 }, 00:21:34.957 "auth": { 00:21:34.957 "state": "completed", 00:21:34.957 "digest": "sha384", 00:21:34.957 "dhgroup": "ffdhe3072" 00:21:34.957 } 00:21:34.957 } 00:21:34.957 ]' 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.957 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.216 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.216 12:11:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.216 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.216 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.216 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.216 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret 
DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.784 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.043 12:11:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.302 00:21:36.302 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.302 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.302 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.561 { 00:21:36.561 "cntlid": 71, 00:21:36.561 "qid": 0, 00:21:36.561 "state": "enabled", 00:21:36.561 "thread": "nvmf_tgt_poll_group_000", 00:21:36.561 "listen_address": { 00:21:36.561 "trtype": "TCP", 00:21:36.561 "adrfam": "IPv4", 00:21:36.561 "traddr": "10.0.0.2", 00:21:36.561 "trsvcid": "4420" 00:21:36.561 }, 00:21:36.561 "peer_address": { 00:21:36.561 "trtype": "TCP", 00:21:36.561 "adrfam": "IPv4", 00:21:36.561 "traddr": "10.0.0.1", 00:21:36.561 "trsvcid": "37694" 00:21:36.561 }, 00:21:36.561 "auth": { 00:21:36.561 "state": "completed", 00:21:36.561 "digest": "sha384", 00:21:36.561 "dhgroup": "ffdhe3072" 00:21:36.561 } 00:21:36.561 } 00:21:36.561 ]' 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.561 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.819 12:11:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:37.388 12:11:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.648 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.907 00:21:37.907 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.907 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.907 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.166 { 00:21:38.166 "cntlid": 73, 00:21:38.166 "qid": 0, 00:21:38.166 "state": "enabled", 00:21:38.166 "thread": "nvmf_tgt_poll_group_000", 00:21:38.166 "listen_address": { 00:21:38.166 "trtype": "TCP", 00:21:38.166 "adrfam": "IPv4", 00:21:38.166 "traddr": "10.0.0.2", 00:21:38.166 "trsvcid": "4420" 00:21:38.166 }, 00:21:38.166 "peer_address": { 00:21:38.166 "trtype": "TCP", 00:21:38.166 "adrfam": "IPv4", 00:21:38.166 "traddr": "10.0.0.1", 00:21:38.166 "trsvcid": "47580" 00:21:38.166 }, 00:21:38.166 "auth": { 00:21:38.166 
"state": "completed", 00:21:38.166 "digest": "sha384", 00:21:38.166 "dhgroup": "ffdhe4096" 00:21:38.166 } 00:21:38.166 } 00:21:38.166 ]' 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.166 12:11:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.166 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.166 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.166 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.166 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.166 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.425 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.993 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.252 12:11:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.252 12:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.252 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.253 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.512 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.512 { 00:21:39.512 "cntlid": 75, 00:21:39.512 "qid": 0, 00:21:39.512 "state": "enabled", 00:21:39.512 "thread": "nvmf_tgt_poll_group_000", 00:21:39.512 "listen_address": { 00:21:39.512 "trtype": "TCP", 00:21:39.512 "adrfam": "IPv4", 00:21:39.512 "traddr": "10.0.0.2", 00:21:39.512 "trsvcid": "4420" 00:21:39.512 }, 00:21:39.512 "peer_address": { 00:21:39.512 "trtype": "TCP", 00:21:39.512 "adrfam": "IPv4", 00:21:39.512 "traddr": "10.0.0.1", 00:21:39.512 "trsvcid": "47602" 00:21:39.512 }, 00:21:39.512 "auth": { 00:21:39.512 "state": "completed", 00:21:39.512 "digest": "sha384", 00:21:39.512 "dhgroup": "ffdhe4096" 00:21:39.512 } 00:21:39.512 } 00:21:39.512 ]' 00:21:39.512 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.771 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.029 12:11:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.596 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:40.856 00:21:40.856 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.856 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.856 12:11:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.114 { 00:21:41.114 "cntlid": 77, 00:21:41.114 "qid": 0, 00:21:41.114 "state": "enabled", 00:21:41.114 "thread": "nvmf_tgt_poll_group_000", 00:21:41.114 "listen_address": { 00:21:41.114 "trtype": "TCP", 00:21:41.114 "adrfam": "IPv4", 00:21:41.114 "traddr": "10.0.0.2", 00:21:41.114 "trsvcid": "4420" 00:21:41.114 }, 00:21:41.114 "peer_address": { 00:21:41.114 "trtype": "TCP", 00:21:41.114 "adrfam": "IPv4", 00:21:41.114 "traddr": "10.0.0.1", 00:21:41.114 "trsvcid": "47630" 00:21:41.114 }, 00:21:41.114 "auth": { 00:21:41.114 "state": "completed", 00:21:41.114 "digest": "sha384", 00:21:41.114 "dhgroup": "ffdhe4096" 00:21:41.114 } 00:21:41.114 } 00:21:41.114 ]' 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.114 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.372 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:41.937 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.937 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.937 12:11:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.937 12:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.196 12:11:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.196 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.196 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.196 12:11:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.196 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:42.453 00:21:42.453 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.453 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.453 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.712 { 00:21:42.712 "cntlid": 79, 00:21:42.712 "qid": 
0, 00:21:42.712 "state": "enabled", 00:21:42.712 "thread": "nvmf_tgt_poll_group_000", 00:21:42.712 "listen_address": { 00:21:42.712 "trtype": "TCP", 00:21:42.712 "adrfam": "IPv4", 00:21:42.712 "traddr": "10.0.0.2", 00:21:42.712 "trsvcid": "4420" 00:21:42.712 }, 00:21:42.712 "peer_address": { 00:21:42.712 "trtype": "TCP", 00:21:42.712 "adrfam": "IPv4", 00:21:42.712 "traddr": "10.0.0.1", 00:21:42.712 "trsvcid": "47654" 00:21:42.712 }, 00:21:42.712 "auth": { 00:21:42.712 "state": "completed", 00:21:42.712 "digest": "sha384", 00:21:42.712 "dhgroup": "ffdhe4096" 00:21:42.712 } 00:21:42.712 } 00:21:42.712 ]' 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.712 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.970 12:11:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.536 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.795 12:11:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.795 12:11:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.053 00:21:44.053 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.053 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.053 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.312 { 00:21:44.312 "cntlid": 81, 00:21:44.312 "qid": 0, 00:21:44.312 "state": "enabled", 00:21:44.312 "thread": "nvmf_tgt_poll_group_000", 00:21:44.312 "listen_address": { 00:21:44.312 "trtype": "TCP", 00:21:44.312 "adrfam": "IPv4", 00:21:44.312 "traddr": "10.0.0.2", 00:21:44.312 "trsvcid": "4420" 00:21:44.312 }, 00:21:44.312 "peer_address": { 00:21:44.312 "trtype": "TCP", 00:21:44.312 "adrfam": "IPv4", 00:21:44.312 "traddr": "10.0.0.1", 00:21:44.312 "trsvcid": "47678" 00:21:44.312 }, 00:21:44.312 "auth": { 00:21:44.312 "state": "completed", 00:21:44.312 "digest": "sha384", 00:21:44.312 "dhgroup": "ffdhe6144" 00:21:44.312 } 00:21:44.312 } 00:21:44.312 ]' 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:44.312 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.571 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.571 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.571 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.571 12:11:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.137 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.395 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.654 00:21:45.654 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.654 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.654 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.913 { 00:21:45.913 "cntlid": 83, 00:21:45.913 "qid": 0, 00:21:45.913 "state": "enabled", 00:21:45.913 "thread": "nvmf_tgt_poll_group_000", 00:21:45.913 "listen_address": { 00:21:45.913 "trtype": "TCP", 00:21:45.913 "adrfam": "IPv4", 00:21:45.913 "traddr": "10.0.0.2", 00:21:45.913 "trsvcid": "4420" 00:21:45.913 }, 00:21:45.913 "peer_address": { 00:21:45.913 "trtype": "TCP", 00:21:45.913 "adrfam": "IPv4", 00:21:45.913 "traddr": "10.0.0.1", 00:21:45.913 "trsvcid": "47712" 00:21:45.913 }, 00:21:45.913 "auth": { 00:21:45.913 "state": "completed", 00:21:45.913 "digest": "sha384", 00:21:45.913 "dhgroup": "ffdhe6144" 00:21:45.913 } 00:21:45.913 } 00:21:45.913 ]' 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.913 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.172 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.172 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.172 12:11:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.172 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret 
DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:46.741 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.999 12:11:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.258 00:21:47.258 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.258 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.258 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.518 { 00:21:47.518 "cntlid": 85, 00:21:47.518 "qid": 0, 00:21:47.518 "state": "enabled", 00:21:47.518 "thread": "nvmf_tgt_poll_group_000", 00:21:47.518 "listen_address": { 00:21:47.518 "trtype": "TCP", 00:21:47.518 "adrfam": "IPv4", 00:21:47.518 "traddr": "10.0.0.2", 00:21:47.518 "trsvcid": "4420" 00:21:47.518 }, 00:21:47.518 "peer_address": { 00:21:47.518 "trtype": "TCP", 00:21:47.518 "adrfam": "IPv4", 00:21:47.518 "traddr": "10.0.0.1", 00:21:47.518 "trsvcid": "44284" 00:21:47.518 }, 00:21:47.518 "auth": { 00:21:47.518 "state": "completed", 00:21:47.518 "digest": "sha384", 00:21:47.518 "dhgroup": "ffdhe6144" 00:21:47.518 } 00:21:47.518 } 00:21:47.518 ]' 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.518 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.777 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.777 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.777 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.777 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.777 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.036 12:11:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
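Each pass of this loop drives one (digest, dhgroup, key index) combination end to end. The sequence below is a condensed sketch of that per-iteration flow, not a verbatim excerpt: it assumes the target and host RPC sockets and the named DH-HMAC-CHAP key objects (key0..key3 / ckey0..ckey3) were registered earlier in the script, rpc.py stands in for the full scripts/rpc.py path, <hostnqn> and <uuid> stand in for the nqn.2014-08.org.nvmexpress:uuid host NQN and host ID visible in the surrounding log, and the concrete values shown are the ones this iteration uses (sha384, ffdhe6144, key2):

    # host side: restrict the initiator to the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: allow the host NQN with the matching key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attach via the SPDK host stack, confirm the negotiated auth state, then detach
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # expect auth.state == "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then clean up before the next key
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <uuid> \
        --dhchap-secret <DHHC-1 key> --dhchap-ctrl-secret <DHHC-1 ctrl key>
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>
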
00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.602 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.860 00:21:49.120 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.120 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.120 12:11:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.120 { 00:21:49.120 "cntlid": 87, 00:21:49.120 "qid": 0, 00:21:49.120 "state": "enabled", 00:21:49.120 "thread": "nvmf_tgt_poll_group_000", 00:21:49.120 "listen_address": { 00:21:49.120 "trtype": "TCP", 00:21:49.120 "adrfam": "IPv4", 00:21:49.120 "traddr": "10.0.0.2", 00:21:49.120 "trsvcid": "4420" 00:21:49.120 }, 00:21:49.120 "peer_address": { 00:21:49.120 "trtype": "TCP", 00:21:49.120 "adrfam": "IPv4", 00:21:49.120 "traddr": "10.0.0.1", 00:21:49.120 "trsvcid": "44308" 00:21:49.120 }, 00:21:49.120 "auth": { 00:21:49.120 "state": "completed", 
00:21:49.120 "digest": "sha384", 00:21:49.120 "dhgroup": "ffdhe6144" 00:21:49.120 } 00:21:49.120 } 00:21:49.120 ]' 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.120 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.379 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.379 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.379 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.379 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.379 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.639 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:50.207 12:11:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.207 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.208 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.208 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:50.790 00:21:50.790 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.790 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.790 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:51.048 { 00:21:51.048 "cntlid": 89, 00:21:51.048 "qid": 0, 00:21:51.048 "state": "enabled", 00:21:51.048 "thread": "nvmf_tgt_poll_group_000", 00:21:51.048 "listen_address": { 00:21:51.048 "trtype": "TCP", 00:21:51.048 "adrfam": "IPv4", 00:21:51.048 "traddr": "10.0.0.2", 00:21:51.048 "trsvcid": "4420" 00:21:51.048 }, 00:21:51.048 "peer_address": { 00:21:51.048 "trtype": "TCP", 00:21:51.048 "adrfam": "IPv4", 00:21:51.048 "traddr": "10.0.0.1", 00:21:51.048 "trsvcid": "44332" 00:21:51.048 }, 00:21:51.048 "auth": { 00:21:51.048 "state": "completed", 00:21:51.048 "digest": "sha384", 00:21:51.048 "dhgroup": "ffdhe8192" 00:21:51.048 } 00:21:51.048 } 00:21:51.048 ]' 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.048 12:11:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.305 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.868 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.127 12:11:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
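After each attach, the script checks that the controller shows up on the host side and that the target reports the expected authentication parameters on the active qpair. A minimal sketch of that verification step, using the same jq filters as the log (rpc.py again abbreviates the full scripts/rpc.py path); this iteration expects sha384 with ffdhe8192:

    # the attached controller should be visible to the host RPC server...
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # ...and the target-side qpair should report the negotiated auth parameters
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
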
00:21:52.385 00:21:52.385 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.385 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.385 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.644 { 00:21:52.644 "cntlid": 91, 00:21:52.644 "qid": 0, 00:21:52.644 "state": "enabled", 00:21:52.644 "thread": "nvmf_tgt_poll_group_000", 00:21:52.644 "listen_address": { 00:21:52.644 "trtype": "TCP", 00:21:52.644 "adrfam": "IPv4", 00:21:52.644 "traddr": "10.0.0.2", 00:21:52.644 "trsvcid": "4420" 00:21:52.644 }, 00:21:52.644 "peer_address": { 00:21:52.644 "trtype": "TCP", 00:21:52.644 "adrfam": "IPv4", 00:21:52.644 "traddr": "10.0.0.1", 00:21:52.644 "trsvcid": "44358" 00:21:52.644 }, 00:21:52.644 "auth": { 00:21:52.644 "state": "completed", 00:21:52.644 "digest": "sha384", 00:21:52.644 "dhgroup": "ffdhe8192" 00:21:52.644 } 00:21:52.644 } 00:21:52.644 ]' 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.644 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.903 12:11:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:21:53.470 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.470 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.470 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:53.470 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.739 12:11:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.306 00:21:54.306 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.306 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.307 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.571 { 
00:21:54.571 "cntlid": 93, 00:21:54.571 "qid": 0, 00:21:54.571 "state": "enabled", 00:21:54.571 "thread": "nvmf_tgt_poll_group_000", 00:21:54.571 "listen_address": { 00:21:54.571 "trtype": "TCP", 00:21:54.571 "adrfam": "IPv4", 00:21:54.571 "traddr": "10.0.0.2", 00:21:54.571 "trsvcid": "4420" 00:21:54.571 }, 00:21:54.571 "peer_address": { 00:21:54.571 "trtype": "TCP", 00:21:54.571 "adrfam": "IPv4", 00:21:54.571 "traddr": "10.0.0.1", 00:21:54.571 "trsvcid": "44394" 00:21:54.571 }, 00:21:54.571 "auth": { 00:21:54.571 "state": "completed", 00:21:54.571 "digest": "sha384", 00:21:54.571 "dhgroup": "ffdhe8192" 00:21:54.571 } 00:21:54.571 } 00:21:54.571 ]' 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.571 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.832 12:11:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:55.398 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:55.399 12:11:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.399 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.966 00:21:55.966 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.966 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.966 12:11:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:56.225 { 00:21:56.225 "cntlid": 95, 00:21:56.225 "qid": 0, 00:21:56.225 "state": "enabled", 00:21:56.225 "thread": "nvmf_tgt_poll_group_000", 00:21:56.225 "listen_address": { 00:21:56.225 "trtype": "TCP", 00:21:56.225 "adrfam": "IPv4", 00:21:56.225 "traddr": "10.0.0.2", 00:21:56.225 "trsvcid": "4420" 00:21:56.225 }, 00:21:56.225 "peer_address": { 00:21:56.225 "trtype": "TCP", 00:21:56.225 "adrfam": "IPv4", 00:21:56.225 "traddr": "10.0.0.1", 00:21:56.225 "trsvcid": "44422" 00:21:56.225 }, 00:21:56.225 "auth": { 00:21:56.225 "state": "completed", 00:21:56.225 "digest": "sha384", 00:21:56.225 "dhgroup": "ffdhe8192" 00:21:56.225 } 00:21:56.225 } 00:21:56.225 ]' 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.225 12:11:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.225 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.484 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.052 12:11:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.312 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.571 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.571 { 00:21:57.571 "cntlid": 97, 00:21:57.571 "qid": 0, 00:21:57.571 "state": "enabled", 00:21:57.571 "thread": "nvmf_tgt_poll_group_000", 00:21:57.571 "listen_address": { 00:21:57.571 "trtype": "TCP", 00:21:57.571 "adrfam": "IPv4", 00:21:57.571 "traddr": "10.0.0.2", 00:21:57.571 "trsvcid": "4420" 00:21:57.571 }, 00:21:57.571 "peer_address": { 00:21:57.571 "trtype": "TCP", 00:21:57.571 "adrfam": "IPv4", 00:21:57.571 "traddr": "10.0.0.1", 00:21:57.571 "trsvcid": "37446" 00:21:57.571 }, 00:21:57.571 "auth": { 00:21:57.571 "state": "completed", 00:21:57.571 "digest": "sha512", 00:21:57.571 "dhgroup": "null" 00:21:57.571 } 00:21:57.571 } 00:21:57.571 ]' 00:21:57.571 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.830 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.090 12:11:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret 
DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.658 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.659 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.917 00:21:58.917 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.917 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.917 12:11:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.176 12:11:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.176 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.176 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.176 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.177 { 00:21:59.177 "cntlid": 99, 00:21:59.177 "qid": 0, 00:21:59.177 "state": "enabled", 00:21:59.177 "thread": "nvmf_tgt_poll_group_000", 00:21:59.177 "listen_address": { 00:21:59.177 "trtype": "TCP", 00:21:59.177 "adrfam": "IPv4", 00:21:59.177 "traddr": "10.0.0.2", 00:21:59.177 "trsvcid": "4420" 00:21:59.177 }, 00:21:59.177 "peer_address": { 00:21:59.177 "trtype": "TCP", 00:21:59.177 "adrfam": "IPv4", 00:21:59.177 "traddr": "10.0.0.1", 00:21:59.177 "trsvcid": "37474" 00:21:59.177 }, 00:21:59.177 "auth": { 00:21:59.177 "state": "completed", 00:21:59.177 "digest": "sha512", 00:21:59.177 "dhgroup": "null" 00:21:59.177 } 00:21:59.177 } 00:21:59.177 ]' 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.177 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.436 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.005 12:11:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.005 12:11:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.264 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.522 00:22:00.522 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.522 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.523 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.782 { 00:22:00.782 "cntlid": 101, 00:22:00.782 "qid": 0, 00:22:00.782 "state": "enabled", 00:22:00.782 "thread": "nvmf_tgt_poll_group_000", 00:22:00.782 "listen_address": { 00:22:00.782 "trtype": "TCP", 00:22:00.782 "adrfam": "IPv4", 00:22:00.782 "traddr": "10.0.0.2", 00:22:00.782 "trsvcid": "4420" 00:22:00.782 }, 00:22:00.782 "peer_address": { 00:22:00.782 "trtype": "TCP", 00:22:00.782 "adrfam": "IPv4", 00:22:00.782 "traddr": "10.0.0.1", 00:22:00.782 "trsvcid": "37504" 00:22:00.782 }, 00:22:00.782 "auth": 
{ 00:22:00.782 "state": "completed", 00:22:00.782 "digest": "sha512", 00:22:00.782 "dhgroup": "null" 00:22:00.782 } 00:22:00.782 } 00:22:00.782 ]' 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.782 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.041 12:11:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.609 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:01.610 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.869 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.869 12:11:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.128 { 00:22:02.128 "cntlid": 103, 00:22:02.128 "qid": 0, 00:22:02.128 "state": "enabled", 00:22:02.128 "thread": "nvmf_tgt_poll_group_000", 00:22:02.128 "listen_address": { 00:22:02.128 "trtype": "TCP", 00:22:02.128 "adrfam": "IPv4", 00:22:02.128 "traddr": "10.0.0.2", 00:22:02.128 "trsvcid": "4420" 00:22:02.128 }, 00:22:02.128 "peer_address": { 00:22:02.128 "trtype": "TCP", 00:22:02.128 "adrfam": "IPv4", 00:22:02.128 "traddr": "10.0.0.1", 00:22:02.128 "trsvcid": "37538" 00:22:02.128 }, 00:22:02.128 "auth": { 00:22:02.128 "state": "completed", 00:22:02.128 "digest": "sha512", 00:22:02.128 "dhgroup": "null" 00:22:02.128 } 00:22:02.128 } 00:22:02.128 ]' 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:02.128 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.387 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.387 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.387 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.387 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:02.955 12:11:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.215 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.473 00:22:03.473 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.474 12:11:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.474 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.732 { 00:22:03.732 "cntlid": 105, 00:22:03.732 "qid": 0, 00:22:03.732 "state": "enabled", 00:22:03.732 "thread": "nvmf_tgt_poll_group_000", 00:22:03.732 "listen_address": { 00:22:03.732 "trtype": "TCP", 00:22:03.732 "adrfam": "IPv4", 00:22:03.732 "traddr": "10.0.0.2", 00:22:03.732 "trsvcid": "4420" 00:22:03.732 }, 00:22:03.732 "peer_address": { 00:22:03.732 "trtype": "TCP", 00:22:03.732 "adrfam": "IPv4", 00:22:03.732 "traddr": "10.0.0.1", 00:22:03.732 "trsvcid": "37566" 00:22:03.732 }, 00:22:03.732 "auth": { 00:22:03.732 "state": "completed", 00:22:03.732 "digest": "sha512", 00:22:03.732 "dhgroup": "ffdhe2048" 00:22:03.732 } 00:22:03.732 } 00:22:03.732 ]' 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.732 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.990 12:11:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:04.557 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
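Each digest/dhgroup/key iteration in the trace above exercises the same per-round DH-HMAC-CHAP flow. A condensed sketch of that round, using only the RPCs and nvme-cli calls seen in this log (the NQNs, host ID and DHHC-1 secrets below are placeholders rather than the values from this run, and key0/ckey0 name key material loaded earlier in the test):

  # host-side SPDK stack: restrict negotiation to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: allow the host NQN with the matching key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # attach through the SPDK host stack, authenticating with the same keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the qpair reports auth state "completed" with the expected digest and dhgroup
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the connection through the kernel initiator using the raw DHHC-1 secrets, then tear down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host-nqn> --hostid <host-id> \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host-nqn>
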
00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.558 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.817 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.817 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.076 12:11:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.076 12:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.076 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.076 { 00:22:05.076 "cntlid": 107, 00:22:05.076 "qid": 0, 00:22:05.076 "state": "enabled", 00:22:05.076 "thread": 
"nvmf_tgt_poll_group_000", 00:22:05.076 "listen_address": { 00:22:05.076 "trtype": "TCP", 00:22:05.076 "adrfam": "IPv4", 00:22:05.076 "traddr": "10.0.0.2", 00:22:05.076 "trsvcid": "4420" 00:22:05.076 }, 00:22:05.076 "peer_address": { 00:22:05.076 "trtype": "TCP", 00:22:05.076 "adrfam": "IPv4", 00:22:05.076 "traddr": "10.0.0.1", 00:22:05.076 "trsvcid": "37596" 00:22:05.076 }, 00:22:05.076 "auth": { 00:22:05.076 "state": "completed", 00:22:05.076 "digest": "sha512", 00:22:05.076 "dhgroup": "ffdhe2048" 00:22:05.076 } 00:22:05.076 } 00:22:05.076 ]' 00:22:05.076 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.076 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.076 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.336 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:05.903 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.903 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.903 12:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.903 12:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.161 12:11:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.161 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.161 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.161 12:11:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:06.161 12:11:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.161 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.162 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.419 00:22:06.419 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.420 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.420 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.677 { 00:22:06.677 "cntlid": 109, 00:22:06.677 "qid": 0, 00:22:06.677 "state": "enabled", 00:22:06.677 "thread": "nvmf_tgt_poll_group_000", 00:22:06.677 "listen_address": { 00:22:06.677 "trtype": "TCP", 00:22:06.677 "adrfam": "IPv4", 00:22:06.677 "traddr": "10.0.0.2", 00:22:06.677 "trsvcid": "4420" 00:22:06.677 }, 00:22:06.677 "peer_address": { 00:22:06.677 "trtype": "TCP", 00:22:06.677 "adrfam": "IPv4", 00:22:06.677 "traddr": "10.0.0.1", 00:22:06.677 "trsvcid": "37636" 00:22:06.677 }, 00:22:06.677 "auth": { 00:22:06.677 "state": "completed", 00:22:06.677 "digest": "sha512", 00:22:06.677 "dhgroup": "ffdhe2048" 00:22:06.677 } 00:22:06.677 } 00:22:06.677 ]' 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.677 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.935 12:11:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.501 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.795 12:11:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.053 00:22:08.053 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.053 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.053 12:11:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.053 { 00:22:08.053 "cntlid": 111, 00:22:08.053 "qid": 0, 00:22:08.053 "state": "enabled", 00:22:08.053 "thread": "nvmf_tgt_poll_group_000", 00:22:08.053 "listen_address": { 00:22:08.053 "trtype": "TCP", 00:22:08.053 "adrfam": "IPv4", 00:22:08.053 "traddr": "10.0.0.2", 00:22:08.053 "trsvcid": "4420" 00:22:08.053 }, 00:22:08.053 "peer_address": { 00:22:08.053 "trtype": "TCP", 00:22:08.053 "adrfam": "IPv4", 00:22:08.053 "traddr": "10.0.0.1", 00:22:08.053 "trsvcid": "52174" 00:22:08.053 }, 00:22:08.053 "auth": { 00:22:08.053 "state": "completed", 00:22:08.053 "digest": "sha512", 00:22:08.053 "dhgroup": "ffdhe2048" 00:22:08.053 } 00:22:08.053 } 00:22:08.053 ]' 00:22:08.053 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.310 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.311 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.568 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.136 12:11:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.136 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.396 00:22:09.396 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.396 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.396 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.655 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.655 { 00:22:09.655 "cntlid": 113, 00:22:09.655 "qid": 0, 00:22:09.655 "state": "enabled", 00:22:09.655 "thread": "nvmf_tgt_poll_group_000", 00:22:09.655 "listen_address": { 00:22:09.655 "trtype": "TCP", 00:22:09.655 "adrfam": "IPv4", 00:22:09.655 "traddr": "10.0.0.2", 00:22:09.655 "trsvcid": "4420" 00:22:09.655 }, 00:22:09.655 "peer_address": { 00:22:09.655 "trtype": "TCP", 00:22:09.655 "adrfam": "IPv4", 00:22:09.655 "traddr": "10.0.0.1", 00:22:09.655 "trsvcid": "52202" 00:22:09.656 }, 00:22:09.656 "auth": { 00:22:09.656 "state": "completed", 00:22:09.656 "digest": "sha512", 00:22:09.656 "dhgroup": "ffdhe3072" 00:22:09.656 } 00:22:09.656 } 00:22:09.656 ]' 00:22:09.656 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.656 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.656 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.656 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:09.656 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.915 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.915 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.915 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.915 12:11:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.482 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.483 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:10.741 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:10.741 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.741 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:10.741 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:10.741 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.742 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.000 00:22:11.000 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.000 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.000 12:12:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.260 { 00:22:11.260 "cntlid": 115, 00:22:11.260 "qid": 0, 00:22:11.260 "state": "enabled", 00:22:11.260 "thread": "nvmf_tgt_poll_group_000", 00:22:11.260 "listen_address": { 00:22:11.260 "trtype": "TCP", 00:22:11.260 "adrfam": "IPv4", 00:22:11.260 "traddr": "10.0.0.2", 00:22:11.260 "trsvcid": "4420" 00:22:11.260 }, 00:22:11.260 "peer_address": { 00:22:11.260 "trtype": "TCP", 00:22:11.260 "adrfam": "IPv4", 00:22:11.260 "traddr": "10.0.0.1", 00:22:11.260 "trsvcid": "52236" 00:22:11.260 }, 00:22:11.260 "auth": { 00:22:11.260 "state": "completed", 00:22:11.260 "digest": "sha512", 00:22:11.260 "dhgroup": "ffdhe3072" 00:22:11.260 } 00:22:11.260 } 
00:22:11.260 ]' 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.260 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.518 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.085 12:12:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.345 12:12:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.345 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.604 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.604 12:12:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.863 { 00:22:12.863 "cntlid": 117, 00:22:12.863 "qid": 0, 00:22:12.863 "state": "enabled", 00:22:12.863 "thread": "nvmf_tgt_poll_group_000", 00:22:12.863 "listen_address": { 00:22:12.863 "trtype": "TCP", 00:22:12.863 "adrfam": "IPv4", 00:22:12.863 "traddr": "10.0.0.2", 00:22:12.863 "trsvcid": "4420" 00:22:12.863 }, 00:22:12.863 "peer_address": { 00:22:12.863 "trtype": "TCP", 00:22:12.863 "adrfam": "IPv4", 00:22:12.863 "traddr": "10.0.0.1", 00:22:12.863 "trsvcid": "52256" 00:22:12.863 }, 00:22:12.863 "auth": { 00:22:12.863 "state": "completed", 00:22:12.863 "digest": "sha512", 00:22:12.863 "dhgroup": "ffdhe3072" 00:22:12.863 } 00:22:12.863 } 00:22:12.863 ]' 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.863 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.122 12:12:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.697 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:13.956 00:22:13.956 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.956 12:12:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.956 12:12:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.215 { 00:22:14.215 "cntlid": 119, 00:22:14.215 "qid": 0, 00:22:14.215 "state": "enabled", 00:22:14.215 "thread": "nvmf_tgt_poll_group_000", 00:22:14.215 "listen_address": { 00:22:14.215 "trtype": "TCP", 00:22:14.215 "adrfam": "IPv4", 00:22:14.215 "traddr": "10.0.0.2", 00:22:14.215 "trsvcid": "4420" 00:22:14.215 }, 00:22:14.215 "peer_address": { 00:22:14.215 "trtype": "TCP", 00:22:14.215 "adrfam": "IPv4", 00:22:14.215 "traddr": "10.0.0.1", 00:22:14.215 "trsvcid": "52292" 00:22:14.215 }, 00:22:14.215 "auth": { 00:22:14.215 "state": "completed", 00:22:14.215 "digest": "sha512", 00:22:14.215 "dhgroup": "ffdhe3072" 00:22:14.215 } 00:22:14.215 } 00:22:14.215 ]' 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.215 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.508 12:12:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.076 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.076 12:12:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.077 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.077 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.336 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.595 00:22:15.595 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.595 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.595 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.855 { 00:22:15.855 "cntlid": 121, 00:22:15.855 "qid": 0, 00:22:15.855 "state": "enabled", 00:22:15.855 "thread": "nvmf_tgt_poll_group_000", 00:22:15.855 "listen_address": { 00:22:15.855 "trtype": "TCP", 00:22:15.855 "adrfam": "IPv4", 
00:22:15.855 "traddr": "10.0.0.2", 00:22:15.855 "trsvcid": "4420" 00:22:15.855 }, 00:22:15.855 "peer_address": { 00:22:15.855 "trtype": "TCP", 00:22:15.855 "adrfam": "IPv4", 00:22:15.855 "traddr": "10.0.0.1", 00:22:15.855 "trsvcid": "52324" 00:22:15.855 }, 00:22:15.855 "auth": { 00:22:15.855 "state": "completed", 00:22:15.855 "digest": "sha512", 00:22:15.855 "dhgroup": "ffdhe4096" 00:22:15.855 } 00:22:15.855 } 00:22:15.855 ]' 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.855 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.115 12:12:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.683 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:16.941 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:16.941 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.941 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.941 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:16.941 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:16.942 12:12:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.942 12:12:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.201 00:22:17.201 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:17.201 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:17.201 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.461 { 00:22:17.461 "cntlid": 123, 00:22:17.461 "qid": 0, 00:22:17.461 "state": "enabled", 00:22:17.461 "thread": "nvmf_tgt_poll_group_000", 00:22:17.461 "listen_address": { 00:22:17.461 "trtype": "TCP", 00:22:17.461 "adrfam": "IPv4", 00:22:17.461 "traddr": "10.0.0.2", 00:22:17.461 "trsvcid": "4420" 00:22:17.461 }, 00:22:17.461 "peer_address": { 00:22:17.461 "trtype": "TCP", 00:22:17.461 "adrfam": "IPv4", 00:22:17.461 "traddr": "10.0.0.1", 00:22:17.461 "trsvcid": "54396" 00:22:17.461 }, 00:22:17.461 "auth": { 00:22:17.461 "state": "completed", 00:22:17.461 "digest": "sha512", 00:22:17.461 "dhgroup": "ffdhe4096" 00:22:17.461 } 00:22:17.461 } 00:22:17.461 ]' 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.461 12:12:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.461 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.721 12:12:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.289 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.548 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.807 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.807 { 00:22:18.807 "cntlid": 125, 00:22:18.807 "qid": 0, 00:22:18.807 "state": "enabled", 00:22:18.807 "thread": "nvmf_tgt_poll_group_000", 00:22:18.807 "listen_address": { 00:22:18.807 "trtype": "TCP", 00:22:18.807 "adrfam": "IPv4", 00:22:18.807 "traddr": "10.0.0.2", 00:22:18.807 "trsvcid": "4420" 00:22:18.807 }, 00:22:18.807 "peer_address": { 00:22:18.807 "trtype": "TCP", 00:22:18.807 "adrfam": "IPv4", 00:22:18.807 "traddr": "10.0.0.1", 00:22:18.807 "trsvcid": "54426" 00:22:18.807 }, 00:22:18.807 "auth": { 00:22:18.807 "state": "completed", 00:22:18.807 "digest": "sha512", 00:22:18.807 "dhgroup": "ffdhe4096" 00:22:18.807 } 00:22:18.807 } 00:22:18.807 ]' 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.807 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.066 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.066 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.066 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.066 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.066 12:12:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.066 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:19.634 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
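The trace above repeats one DH-CHAP cycle per digest/dhgroup/key combination: set the host-side options, register the host NQN on the subsystem with the chosen key(s), attach and verify an authenticated controller over the SPDK RPC socket, re-check with the kernel initiator, then tear everything down. A minimal sketch of that cycle follows, assembled only from the rpc.py calls and flags visible in this log; "rpc.py" stands for the full scripts/rpc.py path shown in the trace, host-side calls go through the /var/tmp/host.sock socket as above, and the DHHC-1 secrets are elided placeholders rather than values to reuse.

# host-side DH-CHAP options for this pass (digest sha512, group ffdhe4096)
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# target side: allow the host NQN on the subsystem with key1/ckey1
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach an authenticated controller over TCP
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# target side: confirm the qpair finished authentication with the expected settings
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect "ffdhe4096"

# detach, then repeat the check with the kernel initiator and the DHHC-1 secrets
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

# clean up before the next digest/dhgroup/key combination
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562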
00:22:19.634 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.634 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.634 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:19.893 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.894 12:12:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.153 00:22:20.153 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:20.153 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.153 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:20.413 { 00:22:20.413 "cntlid": 127, 00:22:20.413 "qid": 0, 00:22:20.413 "state": "enabled", 00:22:20.413 "thread": "nvmf_tgt_poll_group_000", 00:22:20.413 "listen_address": { 00:22:20.413 "trtype": "TCP", 00:22:20.413 "adrfam": "IPv4", 00:22:20.413 "traddr": "10.0.0.2", 00:22:20.413 "trsvcid": "4420" 00:22:20.413 }, 00:22:20.413 "peer_address": { 00:22:20.413 "trtype": "TCP", 00:22:20.413 "adrfam": "IPv4", 00:22:20.413 "traddr": "10.0.0.1", 00:22:20.413 "trsvcid": "54450" 00:22:20.413 }, 00:22:20.413 "auth": { 00:22:20.413 "state": "completed", 00:22:20.413 "digest": "sha512", 00:22:20.413 "dhgroup": "ffdhe4096" 00:22:20.413 } 00:22:20.413 } 00:22:20.413 ]' 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:20.413 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.672 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.672 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.672 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.672 12:12:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:21.241 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.499 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.757 00:22:21.757 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.757 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.757 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.014 { 00:22:22.014 "cntlid": 129, 00:22:22.014 "qid": 0, 00:22:22.014 "state": "enabled", 00:22:22.014 "thread": "nvmf_tgt_poll_group_000", 00:22:22.014 "listen_address": { 00:22:22.014 "trtype": "TCP", 00:22:22.014 "adrfam": "IPv4", 00:22:22.014 "traddr": "10.0.0.2", 00:22:22.014 "trsvcid": "4420" 00:22:22.014 }, 00:22:22.014 "peer_address": { 00:22:22.014 "trtype": "TCP", 00:22:22.014 "adrfam": "IPv4", 00:22:22.014 "traddr": "10.0.0.1", 00:22:22.014 "trsvcid": "54476" 00:22:22.014 }, 00:22:22.014 "auth": { 00:22:22.014 "state": "completed", 00:22:22.014 "digest": "sha512", 00:22:22.014 "dhgroup": "ffdhe6144" 00:22:22.014 } 00:22:22.014 } 00:22:22.014 ]' 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:22.014 12:12:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.014 12:12:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:22.014 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:22.014 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:22.272 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.272 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.272 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.272 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:22.841 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.100 12:12:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.100 12:12:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.359 00:22:23.359 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.359 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.359 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.620 { 00:22:23.620 "cntlid": 131, 00:22:23.620 "qid": 0, 00:22:23.620 "state": "enabled", 00:22:23.620 "thread": "nvmf_tgt_poll_group_000", 00:22:23.620 "listen_address": { 00:22:23.620 "trtype": "TCP", 00:22:23.620 "adrfam": "IPv4", 00:22:23.620 "traddr": "10.0.0.2", 00:22:23.620 "trsvcid": "4420" 00:22:23.620 }, 00:22:23.620 "peer_address": { 00:22:23.620 "trtype": "TCP", 00:22:23.620 "adrfam": "IPv4", 00:22:23.620 "traddr": "10.0.0.1", 00:22:23.620 "trsvcid": "54498" 00:22:23.620 }, 00:22:23.620 "auth": { 00:22:23.620 "state": "completed", 00:22:23.620 "digest": "sha512", 00:22:23.620 "dhgroup": "ffdhe6144" 00:22:23.620 } 00:22:23.620 } 00:22:23.620 ]' 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.620 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.919 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.919 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.919 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.919 12:12:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.487 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:24.746 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.747 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.005 00:22:25.005 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.005 12:12:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:25.005 12:12:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:25.265 { 00:22:25.265 "cntlid": 133, 00:22:25.265 "qid": 0, 00:22:25.265 "state": "enabled", 00:22:25.265 "thread": "nvmf_tgt_poll_group_000", 00:22:25.265 "listen_address": { 00:22:25.265 "trtype": "TCP", 00:22:25.265 "adrfam": "IPv4", 00:22:25.265 "traddr": "10.0.0.2", 00:22:25.265 "trsvcid": "4420" 00:22:25.265 }, 00:22:25.265 "peer_address": { 00:22:25.265 "trtype": "TCP", 00:22:25.265 "adrfam": "IPv4", 00:22:25.265 "traddr": "10.0.0.1", 00:22:25.265 "trsvcid": "54522" 00:22:25.265 }, 00:22:25.265 "auth": { 00:22:25.265 "state": "completed", 00:22:25.265 "digest": "sha512", 00:22:25.265 "dhgroup": "ffdhe6144" 00:22:25.265 } 00:22:25.265 } 00:22:25.265 ]' 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.265 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.524 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:26.092 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.092 12:12:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.092 12:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.092 12:12:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.092 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.092 12:12:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:26.092 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.092 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.361 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.362 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.625 00:22:26.625 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.625 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.625 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.883 { 00:22:26.883 "cntlid": 135, 00:22:26.883 "qid": 0, 00:22:26.883 "state": "enabled", 00:22:26.883 "thread": "nvmf_tgt_poll_group_000", 00:22:26.883 "listen_address": { 00:22:26.883 "trtype": "TCP", 00:22:26.883 "adrfam": "IPv4", 00:22:26.883 "traddr": "10.0.0.2", 00:22:26.883 "trsvcid": "4420" 00:22:26.883 }, 
00:22:26.883 "peer_address": { 00:22:26.883 "trtype": "TCP", 00:22:26.883 "adrfam": "IPv4", 00:22:26.883 "traddr": "10.0.0.1", 00:22:26.883 "trsvcid": "54552" 00:22:26.883 }, 00:22:26.883 "auth": { 00:22:26.883 "state": "completed", 00:22:26.883 "digest": "sha512", 00:22:26.883 "dhgroup": "ffdhe6144" 00:22:26.883 } 00:22:26.883 } 00:22:26.883 ]' 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.883 12:12:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.142 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.709 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.968 12:12:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.536 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.536 { 00:22:28.536 "cntlid": 137, 00:22:28.536 "qid": 0, 00:22:28.536 "state": "enabled", 00:22:28.536 "thread": "nvmf_tgt_poll_group_000", 00:22:28.536 "listen_address": { 00:22:28.536 "trtype": "TCP", 00:22:28.536 "adrfam": "IPv4", 00:22:28.536 "traddr": "10.0.0.2", 00:22:28.536 "trsvcid": "4420" 00:22:28.536 }, 00:22:28.536 "peer_address": { 00:22:28.536 "trtype": "TCP", 00:22:28.536 "adrfam": "IPv4", 00:22:28.536 "traddr": "10.0.0.1", 00:22:28.536 "trsvcid": "40028" 00:22:28.536 }, 00:22:28.536 "auth": { 00:22:28.536 "state": "completed", 00:22:28.536 "digest": "sha512", 00:22:28.536 "dhgroup": "ffdhe8192" 00:22:28.536 } 00:22:28.536 } 00:22:28.536 ]' 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.536 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.794 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.794 12:12:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.794 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.794 12:12:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.359 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.617 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.183 00:22:30.183 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.183 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.183 12:12:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.442 { 00:22:30.442 "cntlid": 139, 00:22:30.442 "qid": 0, 00:22:30.442 "state": "enabled", 00:22:30.442 "thread": "nvmf_tgt_poll_group_000", 00:22:30.442 "listen_address": { 00:22:30.442 "trtype": "TCP", 00:22:30.442 "adrfam": "IPv4", 00:22:30.442 "traddr": "10.0.0.2", 00:22:30.442 "trsvcid": "4420" 00:22:30.442 }, 00:22:30.442 "peer_address": { 00:22:30.442 "trtype": "TCP", 00:22:30.442 "adrfam": "IPv4", 00:22:30.442 "traddr": "10.0.0.1", 00:22:30.442 "trsvcid": "40038" 00:22:30.442 }, 00:22:30.442 "auth": { 00:22:30.442 "state": "completed", 00:22:30.442 "digest": "sha512", 00:22:30.442 "dhgroup": "ffdhe8192" 00:22:30.442 } 00:22:30.442 } 00:22:30.442 ]' 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.442 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.700 12:12:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTVlMTQ5NmYxM2MzODNmNWRkZDhkZTBkZjNiODNmYjfrql76: --dhchap-ctrl-secret DHHC-1:02:ZmYzOWQ1YzY3NTQwODRmNWU0MThlYjcxMzIxYjNiMzZjNzRjMTYzYWY5YmEwM2M4qEUteQ==: 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.268 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.269 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.528 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.528 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.528 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.786 00:22:31.786 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:31.786 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:31.786 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.045 { 00:22:32.045 "cntlid": 141, 00:22:32.045 "qid": 0, 00:22:32.045 "state": "enabled", 00:22:32.045 "thread": "nvmf_tgt_poll_group_000", 00:22:32.045 "listen_address": { 00:22:32.045 "trtype": "TCP", 00:22:32.045 "adrfam": "IPv4", 00:22:32.045 "traddr": "10.0.0.2", 00:22:32.045 "trsvcid": "4420" 00:22:32.045 }, 00:22:32.045 "peer_address": { 00:22:32.045 "trtype": "TCP", 00:22:32.045 "adrfam": "IPv4", 00:22:32.045 "traddr": "10.0.0.1", 00:22:32.045 "trsvcid": "40064" 00:22:32.045 }, 00:22:32.045 "auth": { 00:22:32.045 "state": "completed", 00:22:32.045 "digest": "sha512", 00:22:32.045 "dhgroup": "ffdhe8192" 00:22:32.045 } 00:22:32.045 } 00:22:32.045 ]' 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.045 12:12:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.045 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.045 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.045 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.308 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.308 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.308 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.308 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWJhMDM2Nzk0ZTk4MDQ2NmI2N2I5Y2U2MjhhMmY2ODJlODliNThmMDgzZWQzMzkzjBzbKA==: --dhchap-ctrl-secret DHHC-1:01:ZjE5YmRkZGU0MDE0ZDE3NmU4ODEzNTllYjQxYmE3MWNwPZBp: 00:22:32.874 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.874 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.874 12:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.874 12:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.875 12:12:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.875 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.875 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:32.875 12:12:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.132 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.695 00:22:33.695 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.695 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.695 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.952 { 00:22:33.952 "cntlid": 143, 00:22:33.952 "qid": 0, 00:22:33.952 "state": "enabled", 00:22:33.952 "thread": "nvmf_tgt_poll_group_000", 00:22:33.952 "listen_address": { 00:22:33.952 "trtype": "TCP", 00:22:33.952 "adrfam": "IPv4", 00:22:33.952 "traddr": "10.0.0.2", 00:22:33.952 "trsvcid": "4420" 00:22:33.952 }, 00:22:33.952 "peer_address": { 00:22:33.952 "trtype": "TCP", 00:22:33.952 "adrfam": "IPv4", 00:22:33.952 "traddr": "10.0.0.1", 00:22:33.952 "trsvcid": "40082" 00:22:33.952 }, 00:22:33.952 "auth": { 00:22:33.952 "state": "completed", 00:22:33.952 "digest": "sha512", 00:22:33.952 "dhgroup": "ffdhe8192" 00:22:33.952 } 00:22:33.952 } 00:22:33.952 ]' 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.952 
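The repeating pattern above is the test's connect_authenticate helper, here in its fourth pass (key3) after key0, key1 and key2: the target is told to accept the host NQN with a given DH-HMAC-CHAP key (plus a controller key when one exists for bidirectional authentication), the host-side bdev_nvme_attach_controller is issued over /var/tmp/host.sock so the DH-HMAC-CHAP exchange runs during connect, and nvmf_subsystem_get_qpairs is then checked for digest sha512, dhgroup ffdhe8192 and auth state "completed". The key3 pass registers only --dhchap-key key3 because no controller key exists for that index. A minimal stand-alone sketch of one such pass follows; it uses only the RPCs and flags visible in this trace, assumes key0/ckey0 were registered earlier in the run, and assumes the target answers on its default RPC socket (in the real run the target-side calls go through the test's rpc_cmd helper inside its network namespace).

    # Sketch of one connect_authenticate pass, per the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    key=key0 ckey=ckey0
    # The controller key is optional: with ":+" the whole flag pair disappears
    # when $ckey is empty, which is how the key3 pass ends up unidirectional
    # (same idiom as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) in auth.sh).
    ckey_arg=(${ckey:+--dhchap-ctrlr-key "$ckey"})

    # Target side: allow the host and bind its DH-HMAC-CHAP key(s).
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$key" "${ckey_arg[@]}"

    # Host side: attach a controller; authentication runs during the connect.
    $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$key" "${ckey_arg[@]}"

    # Verify the controller exists and the target-side qpair finished auth.
    [[ $($RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $RPC nvmf_subsystem_get_qpairs "$SUBNQN" \
        | jq -e '.[0].auth | .digest == "sha512" and .dhgroup == "ffdhe8192" and .state == "completed"'

The nvme connect / nvme disconnect pairs interleaved in the trace repeat the same check with the kernel initiator, passing the host and controller secrets directly as the DHHC-1 secret strings printed above.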
12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.952 12:12:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.210 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.774 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.032 12:12:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.291 00:22:35.291 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.291 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.548 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.548 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.548 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.548 12:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.548 12:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.549 12:12:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.549 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.549 { 00:22:35.549 "cntlid": 145, 00:22:35.549 "qid": 0, 00:22:35.549 "state": "enabled", 00:22:35.549 "thread": "nvmf_tgt_poll_group_000", 00:22:35.549 "listen_address": { 00:22:35.549 "trtype": "TCP", 00:22:35.549 "adrfam": "IPv4", 00:22:35.549 "traddr": "10.0.0.2", 00:22:35.549 "trsvcid": "4420" 00:22:35.549 }, 00:22:35.549 "peer_address": { 00:22:35.549 "trtype": "TCP", 00:22:35.549 "adrfam": "IPv4", 00:22:35.549 "traddr": "10.0.0.1", 00:22:35.549 "trsvcid": "40106" 00:22:35.549 }, 00:22:35.549 "auth": { 00:22:35.549 "state": "completed", 00:22:35.549 "digest": "sha512", 00:22:35.549 "dhgroup": "ffdhe8192" 00:22:35.549 } 00:22:35.549 } 00:22:35.549 ]' 00:22:35.549 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.549 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.549 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.807 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.807 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.807 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.807 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.807 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.064 12:12:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MDNkZjNmNDNkNWYxMDc2Yjk5Nzc5ZmVhZDk2NjhhYThlNTM1ZTVhNTNmN2Q4NTcz4RlHnQ==: --dhchap-ctrl-secret DHHC-1:03:OTQ5NjE1NjBmOWE1YWE2YzFjZjVhN2M0Yjg0NDUwNTVhNzA4MzU0NDgzYjQzYmU5OTg1ZWU2NDg5OTUzZWE0ZMb7VHE=: 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:36.630 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:36.889 request: 00:22:36.889 { 00:22:36.889 "name": "nvme0", 00:22:36.889 "trtype": "tcp", 00:22:36.889 "traddr": "10.0.0.2", 00:22:36.889 "adrfam": "ipv4", 00:22:36.889 "trsvcid": "4420", 00:22:36.889 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:36.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:36.889 "prchk_reftag": false, 00:22:36.889 "prchk_guard": false, 00:22:36.889 "hdgst": false, 00:22:36.889 "ddgst": false, 00:22:36.889 "dhchap_key": "key2", 00:22:36.889 "method": "bdev_nvme_attach_controller", 00:22:36.889 "req_id": 1 00:22:36.889 } 00:22:36.890 Got JSON-RPC error response 00:22:36.890 response: 00:22:36.890 { 00:22:36.890 "code": -5, 00:22:36.890 "message": "Input/output error" 00:22:36.890 } 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:36.890 12:12:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.459 request: 00:22:37.459 { 00:22:37.459 "name": "nvme0", 00:22:37.459 "trtype": "tcp", 00:22:37.459 "traddr": "10.0.0.2", 00:22:37.459 "adrfam": "ipv4", 00:22:37.459 "trsvcid": "4420", 00:22:37.459 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:37.459 "prchk_reftag": false, 00:22:37.459 "prchk_guard": false, 00:22:37.459 "hdgst": false, 00:22:37.459 "ddgst": false, 00:22:37.459 "dhchap_key": "key1", 00:22:37.459 "dhchap_ctrlr_key": "ckey2", 00:22:37.459 "method": "bdev_nvme_attach_controller", 00:22:37.459 "req_id": 1 00:22:37.459 } 00:22:37.459 Got JSON-RPC error response 00:22:37.459 response: 00:22:37.459 { 00:22:37.459 "code": -5, 00:22:37.459 "message": "Input/output error" 00:22:37.459 } 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.459 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.028 request: 00:22:38.028 { 00:22:38.028 "name": "nvme0", 00:22:38.028 "trtype": "tcp", 00:22:38.028 "traddr": "10.0.0.2", 00:22:38.028 "adrfam": "ipv4", 00:22:38.028 "trsvcid": "4420", 00:22:38.028 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:38.028 "prchk_reftag": false, 00:22:38.028 "prchk_guard": false, 00:22:38.028 "hdgst": false, 00:22:38.028 "ddgst": false, 00:22:38.028 "dhchap_key": "key1", 00:22:38.028 "dhchap_ctrlr_key": "ckey1", 00:22:38.028 "method": "bdev_nvme_attach_controller", 00:22:38.028 "req_id": 1 00:22:38.028 } 00:22:38.028 Got JSON-RPC error response 00:22:38.028 response: 00:22:38.028 { 00:22:38.028 "code": -5, 00:22:38.028 "message": "Input/output error" 00:22:38.028 } 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1156623 ']' 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1156623' 00:22:38.028 killing process with pid 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1156623 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:38.028 12:12:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1177264 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1177264 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1177264 ']' 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.029 12:12:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1177264 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1177264 ']' 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
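The three failed bdev_nvme_attach_controller calls before this point are deliberate: the host presents key2 against a target entry that only accepts key1, then ckey2 against ckey1, then ckey1 against an entry registered without a controller key, and each attempt surfaces as the JSON-RPC error -5 (Input/output error) dumped above. The first target application (pid 1156623) is then stopped and a fresh nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth so the remaining cases also record the authentication exchange in the target log. A condensed sketch of one expected-failure check in the same style; the NOT wrapper seen in the trace comes from autotest_common.sh, and the plain if below is only a stand-in for it:

    # Expect authentication to fail when the host presents key2 while the
    # target only accepts key1 for this host NQN (first negative case above).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

    # Stand-in for the test's NOT helper: the step passes only if the RPC fails.
    if $RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2; then
        echo "attach with a mismatched key unexpectedly succeeded" >&2
        exit 1
    fi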
00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.288 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.547 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:39.114 00:22:39.114 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:39.114 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:39.114 12:12:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.373 { 00:22:39.373 
"cntlid": 1, 00:22:39.373 "qid": 0, 00:22:39.373 "state": "enabled", 00:22:39.373 "thread": "nvmf_tgt_poll_group_000", 00:22:39.373 "listen_address": { 00:22:39.373 "trtype": "TCP", 00:22:39.373 "adrfam": "IPv4", 00:22:39.373 "traddr": "10.0.0.2", 00:22:39.373 "trsvcid": "4420" 00:22:39.373 }, 00:22:39.373 "peer_address": { 00:22:39.373 "trtype": "TCP", 00:22:39.373 "adrfam": "IPv4", 00:22:39.373 "traddr": "10.0.0.1", 00:22:39.373 "trsvcid": "45238" 00:22:39.373 }, 00:22:39.373 "auth": { 00:22:39.373 "state": "completed", 00:22:39.373 "digest": "sha512", 00:22:39.373 "dhgroup": "ffdhe8192" 00:22:39.373 } 00:22:39.373 } 00:22:39.373 ]' 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.373 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.632 12:12:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MzUzYmNiMzFhMzgwZTg5YjViY2JkMTVjMGQ4Y2ZhNDkxODU5ZGNlNTZjYmQ4YmUzZTdlNWZlMTZlMjc5MDUzZvkAD3Q=: 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:40.209 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:40.467 12:12:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.468 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.468 request: 00:22:40.468 { 00:22:40.468 "name": "nvme0", 00:22:40.468 "trtype": "tcp", 00:22:40.468 "traddr": "10.0.0.2", 00:22:40.468 "adrfam": "ipv4", 00:22:40.468 "trsvcid": "4420", 00:22:40.468 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:40.468 "prchk_reftag": false, 00:22:40.468 "prchk_guard": false, 00:22:40.468 "hdgst": false, 00:22:40.468 "ddgst": false, 00:22:40.468 "dhchap_key": "key3", 00:22:40.468 "method": "bdev_nvme_attach_controller", 00:22:40.468 "req_id": 1 00:22:40.468 } 00:22:40.468 Got JSON-RPC error response 00:22:40.468 response: 00:22:40.468 { 00:22:40.468 "code": -5, 00:22:40.468 "message": "Input/output error" 00:22:40.468 } 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.733 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.053 request: 00:22:41.053 { 00:22:41.053 "name": "nvme0", 00:22:41.053 "trtype": "tcp", 00:22:41.053 "traddr": "10.0.0.2", 00:22:41.053 "adrfam": "ipv4", 00:22:41.053 "trsvcid": "4420", 00:22:41.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:41.053 "prchk_reftag": false, 00:22:41.053 "prchk_guard": false, 00:22:41.053 "hdgst": false, 00:22:41.053 "ddgst": false, 00:22:41.053 "dhchap_key": "key3", 00:22:41.053 "method": "bdev_nvme_attach_controller", 00:22:41.053 "req_id": 1 00:22:41.053 } 00:22:41.053 Got JSON-RPC error response 00:22:41.053 response: 00:22:41.053 { 00:22:41.053 "code": -5, 00:22:41.053 "message": "Input/output error" 00:22:41.053 } 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.053 12:12:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.311 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.312 request: 00:22:41.312 { 00:22:41.312 "name": "nvme0", 00:22:41.312 "trtype": "tcp", 00:22:41.312 "traddr": "10.0.0.2", 00:22:41.312 "adrfam": "ipv4", 00:22:41.312 "trsvcid": "4420", 00:22:41.312 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:41.312 "prchk_reftag": false, 00:22:41.312 "prchk_guard": false, 00:22:41.312 "hdgst": false, 00:22:41.312 "ddgst": false, 00:22:41.312 
"dhchap_key": "key0", 00:22:41.312 "dhchap_ctrlr_key": "key1", 00:22:41.312 "method": "bdev_nvme_attach_controller", 00:22:41.312 "req_id": 1 00:22:41.312 } 00:22:41.312 Got JSON-RPC error response 00:22:41.312 response: 00:22:41.312 { 00:22:41.312 "code": -5, 00:22:41.312 "message": "Input/output error" 00:22:41.312 } 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:41.312 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:41.571 00:22:41.571 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:41.571 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:41.571 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.830 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.831 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.831 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1156666 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1156666 ']' 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1156666 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1156666 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1156666' 00:22:42.090 killing process with pid 1156666 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1156666 00:22:42.090 12:12:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1156666 
00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.349 rmmod nvme_tcp 00:22:42.349 rmmod nvme_fabrics 00:22:42.349 rmmod nvme_keyring 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1177264 ']' 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1177264 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1177264 ']' 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1177264 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1177264 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1177264' 00:22:42.349 killing process with pid 1177264 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1177264 00:22:42.349 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1177264 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.608 12:12:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.142 12:12:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.142 12:12:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ulj /tmp/spdk.key-sha256.omC /tmp/spdk.key-sha384.jGf /tmp/spdk.key-sha512.SvK /tmp/spdk.key-sha512.PLL /tmp/spdk.key-sha384.SlD /tmp/spdk.key-sha256.mNk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:45.142 00:22:45.142 real 2m11.292s 00:22:45.142 user 5m1.908s 00:22:45.142 sys 0m21.056s 00:22:45.142 12:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:45.142 12:12:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.142 ************************************ 00:22:45.142 END TEST nvmf_auth_target 00:22:45.142 ************************************ 00:22:45.142 12:12:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:45.142 12:12:34 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:45.142 12:12:34 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:45.142 12:12:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:45.142 12:12:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.142 12:12:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.142 ************************************ 00:22:45.142 START TEST nvmf_bdevio_no_huge 00:22:45.142 ************************************ 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:45.142 * Looking for test storage... 00:22:45.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
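The bdevio run that starts here is driven by the same run_test wrapper, and its --no-hugepages argument is what later makes both the nvmf target and the bdevio application launch with --no-huge -s 1024 (no hugepages, with -s giving the DPDK memory size in MB). A minimal restatement of the invocation and of the host identity generated by nvmf/common.sh for this run (values copied from the trace; the exact way common.sh derives the host ID from the generated NQN is not reproduced here):

  run_test nvmf_bdevio_no_huge \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
      --transport=tcp --no-hugepages

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # this run: nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562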
00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.142 12:12:34 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.142 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:45.143 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:45.143 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:45.143 12:12:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.420 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:50.421 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:50.421 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.421 
12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:50.421 Found net devices under 0000:86:00.0: cvl_0_0 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:50.421 Found net devices under 0000:86:00.1: cvl_0_1 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.421 12:12:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.421 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:50.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:22:50.421 00:22:50.421 --- 10.0.0.2 ping statistics --- 00:22:50.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.421 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:50.681 00:22:50.681 --- 10.0.0.1 ping statistics --- 00:22:50.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.681 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1181412 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 
1181412 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1181412 ']' 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.681 12:12:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:50.681 [2024-07-15 12:12:40.523705] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:22:50.681 [2024-07-15 12:12:40.523752] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:50.681 [2024-07-15 12:12:40.597913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.681 [2024-07-15 12:12:40.662135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.681 [2024-07-15 12:12:40.662173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.681 [2024-07-15 12:12:40.662180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.681 [2024-07-15 12:12:40.662186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.681 [2024-07-15 12:12:40.662191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
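The target for this test is the command captured just above, run inside the cvl_0_0_ns_spdk network namespace. Broken out below with the flag meanings as annotations (the decoding of the flags is an editorial note, not part of the trace; everything else is copied from it):

  # -i 0              : shared-memory ID ($NVMF_APP_SHM_ID)
  # -e 0xFFFF         : tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified" above)
  # --no-huge -s 1024 : no hugepages, 1024 MB of ordinary memory for DPDK
  # -m 0x78           : core mask 0b1111000, i.e. cores 3-6 ("Total cores available: 4")
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The reactor-started notices that follow on cores 4, 5, 3 and 6 are consistent with that mask.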
00:22:50.681 [2024-07-15 12:12:40.662319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:50.681 [2024-07-15 12:12:40.662436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:50.681 [2024-07-15 12:12:40.662520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.681 [2024-07-15 12:12:40.662521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 [2024-07-15 12:12:41.370505] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 Malloc0 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:51.619 [2024-07-15 12:12:41.414800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:51.619 { 00:22:51.619 "params": { 00:22:51.619 "name": "Nvme$subsystem", 00:22:51.619 "trtype": "$TEST_TRANSPORT", 00:22:51.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.619 "adrfam": "ipv4", 00:22:51.619 "trsvcid": "$NVMF_PORT", 00:22:51.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.619 "hdgst": ${hdgst:-false}, 00:22:51.619 "ddgst": ${ddgst:-false} 00:22:51.619 }, 00:22:51.619 "method": "bdev_nvme_attach_controller" 00:22:51.619 } 00:22:51.619 EOF 00:22:51.619 )") 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:51.619 12:12:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:51.619 "params": { 00:22:51.619 "name": "Nvme1", 00:22:51.619 "trtype": "tcp", 00:22:51.619 "traddr": "10.0.0.2", 00:22:51.619 "adrfam": "ipv4", 00:22:51.619 "trsvcid": "4420", 00:22:51.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.619 "hdgst": false, 00:22:51.619 "ddgst": false 00:22:51.619 }, 00:22:51.619 "method": "bdev_nvme_attach_controller" 00:22:51.619 }' 00:22:51.619 [2024-07-15 12:12:41.465997] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:22:51.619 [2024-07-15 12:12:41.466042] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1181660 ] 00:22:51.619 [2024-07-15 12:12:41.533117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:51.619 [2024-07-15 12:12:41.599539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.619 [2024-07-15 12:12:41.599643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.619 [2024-07-15 12:12:41.599644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.879 I/O targets: 00:22:51.879 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:51.879 00:22:51.879 00:22:51.879 CUnit - A unit testing framework for C - Version 2.1-3 00:22:51.879 http://cunit.sourceforge.net/ 00:22:51.879 00:22:51.879 00:22:51.879 Suite: bdevio tests on: Nvme1n1 00:22:52.138 Test: blockdev write read block ...passed 00:22:52.138 Test: blockdev write zeroes read block ...passed 00:22:52.138 Test: blockdev write zeroes read no split ...passed 00:22:52.138 Test: blockdev write zeroes read split ...passed 00:22:52.138 Test: blockdev write zeroes read split partial ...passed 00:22:52.138 Test: blockdev reset ...[2024-07-15 12:12:42.072998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.138 [2024-07-15 12:12:42.073061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e9e80 (9): Bad file descriptor 00:22:52.397 [2024-07-15 12:12:42.143272] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:52.397 passed 00:22:52.397 Test: blockdev write read 8 blocks ...passed 00:22:52.397 Test: blockdev write read size > 128k ...passed 00:22:52.397 Test: blockdev write read invalid size ...passed 00:22:52.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:52.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:52.397 Test: blockdev write read max offset ...passed 00:22:52.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:52.397 Test: blockdev writev readv 8 blocks ...passed 00:22:52.397 Test: blockdev writev readv 30 x 1block ...passed 00:22:52.397 Test: blockdev writev readv block ...passed 00:22:52.397 Test: blockdev writev readv size > 128k ...passed 00:22:52.397 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:52.397 Test: blockdev comparev and writev ...[2024-07-15 12:12:42.355518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.355546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.355559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.355566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.355814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.355824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.355836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.355842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.356089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.356098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.356109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.356115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.356365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.356375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.397 [2024-07-15 12:12:42.356386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:52.397 [2024-07-15 12:12:42.356393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.397 passed 00:22:52.657 Test: blockdev nvme passthru rw ...passed 00:22:52.657 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:12:42.438545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.657 [2024-07-15 12:12:42.438563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.657 [2024-07-15 12:12:42.438689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.657 [2024-07-15 12:12:42.438699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.657 [2024-07-15 12:12:42.438818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.657 [2024-07-15 12:12:42.438827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.657 [2024-07-15 12:12:42.438945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.657 [2024-07-15 12:12:42.438954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.657 passed 00:22:52.657 Test: blockdev nvme admin passthru ...passed 00:22:52.657 Test: blockdev copy ...passed 00:22:52.657 00:22:52.657 Run Summary: Type Total Ran Passed Failed Inactive 00:22:52.657 suites 1 1 n/a 0 0 00:22:52.657 tests 23 23 23 0 0 00:22:52.657 asserts 152 152 152 0 n/a 00:22:52.657 00:22:52.657 Elapsed time = 1.267 seconds 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.917 rmmod nvme_tcp 00:22:52.917 rmmod nvme_fabrics 00:22:52.917 rmmod nvme_keyring 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1181412 ']' 00:22:52.917 12:12:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1181412 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1181412 ']' 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1181412 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181412 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181412' 00:22:52.917 killing process with pid 1181412 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1181412 00:22:52.917 12:12:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1181412 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.484 12:12:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.386 12:12:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.386 00:22:55.386 real 0m10.650s 00:22:55.386 user 0m13.954s 00:22:55.386 sys 0m5.237s 00:22:55.386 12:12:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.386 12:12:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:55.386 ************************************ 00:22:55.386 END TEST nvmf_bdevio_no_huge 00:22:55.386 ************************************ 00:22:55.386 12:12:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:55.386 12:12:45 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:55.386 12:12:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:55.386 12:12:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.386 12:12:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.386 ************************************ 00:22:55.386 START TEST nvmf_tls 00:22:55.386 ************************************ 00:22:55.386 12:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:55.646 * Looking for test storage... 
00:22:55.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.646 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.647 12:12:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.943 
12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:00.943 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.944 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.944 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.944 12:12:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:23:01.204 00:23:01.204 --- 10.0.0.2 ping statistics --- 00:23:01.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.204 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:01.204 00:23:01.204 --- 10.0.0.1 ping statistics --- 00:23:01.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.204 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1185363 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1185363 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1185363 ']' 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.204 12:12:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.463 [2024-07-15 12:12:51.246467] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:23:01.463 [2024-07-15 12:12:51.246514] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.463 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.463 [2024-07-15 12:12:51.321129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.463 [2024-07-15 12:12:51.361393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.463 [2024-07-15 12:12:51.361431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.463 [2024-07-15 12:12:51.361438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.463 [2024-07-15 12:12:51.361444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.463 [2024-07-15 12:12:51.361450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.463 [2024-07-15 12:12:51.361474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:02.400 true 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.400 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:02.659 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:02.659 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:02.659 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:02.659 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.659 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:02.918 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:02.918 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:02.918 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:03.177 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.177 12:12:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:03.177 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:03.177 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:03.177 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.177 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:03.434 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:03.434 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:03.434 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:03.692 12:12:53 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.692 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:03.692 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:03.692 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:03.692 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:03.951 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:03.951 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:04.250 12:12:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.rCSzY95PFu 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.hA2BlCveIa 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.rCSzY95PFu 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hA2BlCveIa 00:23:04.250 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:04.533 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:04.533 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.rCSzY95PFu 00:23:04.533 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rCSzY95PFu 00:23:04.533 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.793 [2024-07-15 12:12:54.638197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.793 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.052 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.052 [2024-07-15 12:12:54.983083] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.052 [2024-07-15 12:12:54.983282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.052 12:12:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.312 malloc0 00:23:05.312 12:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.570 12:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rCSzY95PFu 00:23:05.570 [2024-07-15 12:12:55.496583] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.570 12:12:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.rCSzY95PFu 00:23:05.570 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.783 Initializing NVMe Controllers 00:23:17.783 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:17.783 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:17.783 Initialization complete. Launching workers. 
00:23:17.783 ======================================================== 00:23:17.783 Latency(us) 00:23:17.783 Device Information : IOPS MiB/s Average min max 00:23:17.783 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16552.67 64.66 3866.88 842.41 7104.51 00:23:17.783 ======================================================== 00:23:17.783 Total : 16552.67 64.66 3866.88 842.41 7104.51 00:23:17.783 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rCSzY95PFu 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rCSzY95PFu' 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1187760 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1187760 /var/tmp/bdevperf.sock 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1187760 ']' 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.783 [2024-07-15 12:13:05.665113] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:17.783 [2024-07-15 12:13:05.665162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187760 ] 00:23:17.783 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.783 [2024-07-15 12:13:05.730936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.783 [2024-07-15 12:13:05.771614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.783 12:13:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.784 12:13:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rCSzY95PFu 00:23:17.784 [2024-07-15 12:13:06.010810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.784 [2024-07-15 12:13:06.010874] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:17.784 TLSTESTn1 00:23:17.784 12:13:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.784 Running I/O for 10 seconds... 00:23:27.758 00:23:27.758 Latency(us) 00:23:27.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.758 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.758 Verification LBA range: start 0x0 length 0x2000 00:23:27.758 TLSTESTn1 : 10.02 5492.67 21.46 0.00 0.00 23266.92 4729.99 61090.95 00:23:27.758 =================================================================================================================== 00:23:27.758 Total : 5492.67 21.46 0.00 0.00 23266.92 4729.99 61090.95 00:23:27.758 0 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1187760 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1187760 ']' 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1187760 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187760 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187760' 00:23:27.758 killing process with pid 1187760 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1187760 00:23:27.758 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.758 00:23:27.758 Latency(us) 00:23:27.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:27.758 =================================================================================================================== 00:23:27.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.758 [2024-07-15 12:13:16.291003] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1187760 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hA2BlCveIa 00:23:27.758 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hA2BlCveIa 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hA2BlCveIa 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hA2BlCveIa' 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1189369 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1189369 /var/tmp/bdevperf.sock 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1189369 ']' 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.759 [2024-07-15 12:13:16.511095] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:27.759 [2024-07-15 12:13:16.511146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189369 ] 00:23:27.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.759 [2024-07-15 12:13:16.577989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.759 [2024-07-15 12:13:16.614877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hA2BlCveIa 00:23:27.759 [2024-07-15 12:13:16.866691] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.759 [2024-07-15 12:13:16.866773] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:27.759 [2024-07-15 12:13:16.877639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:27.759 [2024-07-15 12:13:16.878086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ff0 (107): Transport endpoint is not connected 00:23:27.759 [2024-07-15 12:13:16.879078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x917ff0 (9): Bad file descriptor 00:23:27.759 [2024-07-15 12:13:16.880080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:27.759 [2024-07-15 12:13:16.880089] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:27.759 [2024-07-15 12:13:16.880098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:27.759 request: 00:23:27.759 { 00:23:27.759 "name": "TLSTEST", 00:23:27.759 "trtype": "tcp", 00:23:27.759 "traddr": "10.0.0.2", 00:23:27.759 "adrfam": "ipv4", 00:23:27.759 "trsvcid": "4420", 00:23:27.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.759 "prchk_reftag": false, 00:23:27.759 "prchk_guard": false, 00:23:27.759 "hdgst": false, 00:23:27.759 "ddgst": false, 00:23:27.759 "psk": "/tmp/tmp.hA2BlCveIa", 00:23:27.759 "method": "bdev_nvme_attach_controller", 00:23:27.759 "req_id": 1 00:23:27.759 } 00:23:27.759 Got JSON-RPC error response 00:23:27.759 response: 00:23:27.759 { 00:23:27.759 "code": -5, 00:23:27.759 "message": "Input/output error" 00:23:27.759 } 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1189369 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1189369 ']' 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1189369 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189369 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189369' 00:23:27.759 killing process with pid 1189369 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1189369 00:23:27.759 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.759 00:23:27.759 Latency(us) 00:23:27.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.759 =================================================================================================================== 00:23:27.759 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.759 [2024-07-15 12:13:16.952094] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.759 12:13:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1189369 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rCSzY95PFu 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rCSzY95PFu 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.rCSzY95PFu 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rCSzY95PFu' 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1189601 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1189601 /var/tmp/bdevperf.sock 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1189601 ']' 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.759 [2024-07-15 12:13:17.166147] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:27.759 [2024-07-15 12:13:17.166199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189601 ] 00:23:27.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.759 [2024-07-15 12:13:17.233697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.759 [2024-07-15 12:13:17.269938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:27.759 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.rCSzY95PFu 00:23:27.759 [2024-07-15 12:13:17.513049] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.759 [2024-07-15 12:13:17.513144] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:27.759 [2024-07-15 12:13:17.519190] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:27.759 [2024-07-15 12:13:17.519212] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:27.759 [2024-07-15 12:13:17.519244] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:27.759 [2024-07-15 12:13:17.519380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352ff0 (107): Transport endpoint is not connected 00:23:27.759 [2024-07-15 12:13:17.520372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2352ff0 (9): Bad file descriptor 00:23:27.759 [2024-07-15 12:13:17.521374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:27.759 [2024-07-15 12:13:17.521383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:27.759 [2024-07-15 12:13:17.521391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:27.759 request: 00:23:27.759 { 00:23:27.760 "name": "TLSTEST", 00:23:27.760 "trtype": "tcp", 00:23:27.760 "traddr": "10.0.0.2", 00:23:27.760 "adrfam": "ipv4", 00:23:27.760 "trsvcid": "4420", 00:23:27.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.760 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:27.760 "prchk_reftag": false, 00:23:27.760 "prchk_guard": false, 00:23:27.760 "hdgst": false, 00:23:27.760 "ddgst": false, 00:23:27.760 "psk": "/tmp/tmp.rCSzY95PFu", 00:23:27.760 "method": "bdev_nvme_attach_controller", 00:23:27.760 "req_id": 1 00:23:27.760 } 00:23:27.760 Got JSON-RPC error response 00:23:27.760 response: 00:23:27.760 { 00:23:27.760 "code": -5, 00:23:27.760 "message": "Input/output error" 00:23:27.760 } 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1189601 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1189601 ']' 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1189601 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189601 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189601' 00:23:27.760 killing process with pid 1189601 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1189601 00:23:27.760 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.760 00:23:27.760 Latency(us) 00:23:27.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.760 =================================================================================================================== 00:23:27.760 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.760 [2024-07-15 12:13:17.595438] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1189601 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rCSzY95PFu 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rCSzY95PFu 00:23:27.760 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.rCSzY95PFu 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rCSzY95PFu' 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1189619 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1189619 /var/tmp/bdevperf.sock 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1189619 ']' 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.018 [2024-07-15 12:13:17.805451] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:28.018 [2024-07-15 12:13:17.805499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189619 ] 00:23:28.018 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.018 [2024-07-15 12:13:17.871658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.018 [2024-07-15 12:13:17.907484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.018 12:13:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rCSzY95PFu 00:23:28.277 [2024-07-15 12:13:18.158530] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.277 [2024-07-15 12:13:18.158609] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.277 [2024-07-15 12:13:18.163140] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:28.277 [2024-07-15 12:13:18.163160] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:28.277 [2024-07-15 12:13:18.163181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.277 [2024-07-15 12:13:18.163846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eafff0 (107): Transport endpoint is not connected 00:23:28.278 [2024-07-15 12:13:18.164836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eafff0 (9): Bad file descriptor 00:23:28.278 [2024-07-15 12:13:18.165840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:28.278 [2024-07-15 12:13:18.165850] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.278 [2024-07-15 12:13:18.165859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:28.278 request: 00:23:28.278 { 00:23:28.278 "name": "TLSTEST", 00:23:28.278 "trtype": "tcp", 00:23:28.278 "traddr": "10.0.0.2", 00:23:28.278 "adrfam": "ipv4", 00:23:28.278 "trsvcid": "4420", 00:23:28.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.278 "prchk_reftag": false, 00:23:28.278 "prchk_guard": false, 00:23:28.278 "hdgst": false, 00:23:28.278 "ddgst": false, 00:23:28.278 "psk": "/tmp/tmp.rCSzY95PFu", 00:23:28.278 "method": "bdev_nvme_attach_controller", 00:23:28.278 "req_id": 1 00:23:28.278 } 00:23:28.278 Got JSON-RPC error response 00:23:28.278 response: 00:23:28.278 { 00:23:28.278 "code": -5, 00:23:28.278 "message": "Input/output error" 00:23:28.278 } 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1189619 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1189619 ']' 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1189619 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189619 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189619' 00:23:28.278 killing process with pid 1189619 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1189619 00:23:28.278 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.278 00:23:28.278 Latency(us) 00:23:28.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.278 =================================================================================================================== 00:23:28.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.278 [2024-07-15 12:13:18.235396] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.278 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1189619 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1189840 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1189840 /var/tmp/bdevperf.sock 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1189840 ']' 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.537 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.537 [2024-07-15 12:13:18.451142] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:28.537 [2024-07-15 12:13:18.451188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189840 ] 00:23:28.537 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.537 [2024-07-15 12:13:18.519005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.795 [2024-07-15 12:13:18.557732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.795 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.795 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.795 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:29.053 [2024-07-15 12:13:18.812526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:29.053 [2024-07-15 12:13:18.814276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c65e0 (9): Bad file descriptor 00:23:29.054 [2024-07-15 12:13:18.815277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:29.054 [2024-07-15 12:13:18.815286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:29.054 [2024-07-15 12:13:18.815295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:29.054 request: 00:23:29.054 { 00:23:29.054 "name": "TLSTEST", 00:23:29.054 "trtype": "tcp", 00:23:29.054 "traddr": "10.0.0.2", 00:23:29.054 "adrfam": "ipv4", 00:23:29.054 "trsvcid": "4420", 00:23:29.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.054 "prchk_reftag": false, 00:23:29.054 "prchk_guard": false, 00:23:29.054 "hdgst": false, 00:23:29.054 "ddgst": false, 00:23:29.054 "method": "bdev_nvme_attach_controller", 00:23:29.054 "req_id": 1 00:23:29.054 } 00:23:29.054 Got JSON-RPC error response 00:23:29.054 response: 00:23:29.054 { 00:23:29.054 "code": -5, 00:23:29.054 "message": "Input/output error" 00:23:29.054 } 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1189840 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1189840 ']' 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1189840 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189840 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189840' 00:23:29.054 killing process with pid 1189840 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1189840 00:23:29.054 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.054 00:23:29.054 Latency(us) 00:23:29.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.054 =================================================================================================================== 00:23:29.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.054 12:13:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1189840 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1185363 00:23:29.054 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1185363 ']' 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1185363 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1185363 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1185363' 00:23:29.313 
killing process with pid 1185363 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1185363 00:23:29.313 [2024-07-15 12:13:19.102161] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1185363 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:29.313 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Lplbe1aQQ2 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Lplbe1aQQ2 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1189872 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1189872 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1189872 ']' 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.572 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.572 [2024-07-15 12:13:19.390601] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:29.572 [2024-07-15 12:13:19.390647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.572 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.572 [2024-07-15 12:13:19.460537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.572 [2024-07-15 12:13:19.500070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.572 [2024-07-15 12:13:19.500110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.572 [2024-07-15 12:13:19.500117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.572 [2024-07-15 12:13:19.500123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.572 [2024-07-15 12:13:19.500128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.572 [2024-07-15 12:13:19.500144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.830 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:23:29.831 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lplbe1aQQ2 00:23:29.831 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:29.831 [2024-07-15 12:13:19.780424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.831 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.090 12:13:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:30.349 [2024-07-15 12:13:20.129309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.349 [2024-07-15 12:13:20.129508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.349 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:30.349 malloc0 00:23:30.349 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:30.607 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Lplbe1aQQ2 00:23:30.866 [2024-07-15 12:13:20.650812] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lplbe1aQQ2 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Lplbe1aQQ2' 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1190129 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1190129 /var/tmp/bdevperf.sock 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1190129 ']' 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.866 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.866 [2024-07-15 12:13:20.705623] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:30.866 [2024-07-15 12:13:20.705668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190129 ] 00:23:30.866 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.866 [2024-07-15 12:13:20.773276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.867 [2024-07-15 12:13:20.812199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.126 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.126 12:13:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:31.126 12:13:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2 00:23:31.126 [2024-07-15 12:13:21.060443] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.126 [2024-07-15 12:13:21.060525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.385 TLSTESTn1 00:23:31.385 12:13:21 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:31.385 Running I/O for 10 seconds... 00:23:41.384 00:23:41.384 Latency(us) 00:23:41.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:41.384 Verification LBA range: start 0x0 length 0x2000 00:23:41.384 TLSTESTn1 : 10.03 5091.97 19.89 0.00 0.00 25095.20 4729.99 37156.06 00:23:41.384 =================================================================================================================== 00:23:41.384 Total : 5091.97 19.89 0.00 0.00 25095.20 4729.99 37156.06 00:23:41.384 0 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1190129 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1190129 ']' 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1190129 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1190129 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1190129' 00:23:41.384 killing process with pid 1190129 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1190129 00:23:41.384 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.384 00:23:41.384 Latency(us) 00:23:41.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:41.384 =================================================================================================================== 00:23:41.384 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.384 [2024-07-15 12:13:31.333357] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:41.384 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1190129 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Lplbe1aQQ2 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lplbe1aQQ2 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lplbe1aQQ2 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Lplbe1aQQ2 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Lplbe1aQQ2' 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1191952 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1191952 /var/tmp/bdevperf.sock 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1191952 ']' 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.644 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.644 [2024-07-15 12:13:31.557266] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:41.644 [2024-07-15 12:13:31.557315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191952 ] 00:23:41.644 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.644 [2024-07-15 12:13:31.617896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.903 [2024-07-15 12:13:31.657311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.903 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.903 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.903 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2 00:23:42.163 [2024-07-15 12:13:31.908357] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.163 [2024-07-15 12:13:31.908402] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:42.163 [2024-07-15 12:13:31.908425] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Lplbe1aQQ2 00:23:42.163 request: 00:23:42.163 { 00:23:42.163 "name": "TLSTEST", 00:23:42.163 "trtype": "tcp", 00:23:42.163 "traddr": "10.0.0.2", 00:23:42.163 "adrfam": "ipv4", 00:23:42.163 "trsvcid": "4420", 00:23:42.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.163 "prchk_reftag": false, 00:23:42.163 "prchk_guard": false, 00:23:42.163 "hdgst": false, 00:23:42.163 "ddgst": false, 00:23:42.163 "psk": "/tmp/tmp.Lplbe1aQQ2", 00:23:42.163 "method": "bdev_nvme_attach_controller", 00:23:42.163 "req_id": 1 00:23:42.163 } 00:23:42.163 Got JSON-RPC error response 00:23:42.163 response: 00:23:42.163 { 00:23:42.163 "code": -1, 00:23:42.163 "message": "Operation not permitted" 00:23:42.163 } 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1191952 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1191952 ']' 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1191952 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191952 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191952' 00:23:42.163 killing process with pid 1191952 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1191952 00:23:42.163 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.163 00:23:42.163 Latency(us) 00:23:42.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.163 
=================================================================================================================== 00:23:42.163 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.163 12:13:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1191952 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1189872 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1189872 ']' 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1189872 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1189872 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1189872' 00:23:42.163 killing process with pid 1189872 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1189872 00:23:42.163 [2024-07-15 12:13:32.163404] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:42.163 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1189872 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1192016 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1192016 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1192016 ']' 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
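For reference, the interchange-format key this test keeps in /tmp/tmp.Lplbe1aQQ2 was produced by format_interchange_psk / format_key (nvmf/common.sh, called at target/tls.sh@159 earlier in this log). The following is only a sketch of that helper, not the actual nvmf/common.sh code: it assumes the key argument is used as raw ASCII bytes and that the trailing four bytes before base64-encoding are a little-endian zlib CRC-32 of the key, which is consistent with the NVMeTLSkey-1:02:MDAx... value printed above.

# Sketch: rebuild the TLS PSK interchange string
#   NVMeTLSkey-1:<digest>:<base64(key || CRC-32)>:
# from a configured key string (assumptions as stated above).
format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
}

# With the digest argument 2 used at tls.sh@159, this should reproduce the key above:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 2

The test writes that string to a mktemp file and chmods it 0600; the "Operation not permitted" attach failure above is what happens once the mode is widened to 0666 at target/tls.sh@170.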
00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.424 12:13:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.424 [2024-07-15 12:13:32.407714] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:23:42.424 [2024-07-15 12:13:32.407763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.684 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.684 [2024-07-15 12:13:32.481118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.684 [2024-07-15 12:13:32.519466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.684 [2024-07-15 12:13:32.519519] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.684 [2024-07-15 12:13:32.519525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.684 [2024-07-15 12:13:32.519531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.684 [2024-07-15 12:13:32.519536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.684 [2024-07-15 12:13:32.519570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lplbe1aQQ2 00:23:43.254 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.522 [2024-07-15 12:13:33.409893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.522 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.780 
12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:43.780 [2024-07-15 12:13:33.742729] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:43.780 [2024-07-15 12:13:33.742915] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.780 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.039 malloc0 00:23:44.039 12:13:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2 00:23:44.297 [2024-07-15 12:13:34.272298] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:44.297 [2024-07-15 12:13:34.272326] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:44.297 [2024-07-15 12:13:34.272364] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:44.297 request: 00:23:44.297 { 00:23:44.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.297 "host": "nqn.2016-06.io.spdk:host1", 00:23:44.297 "psk": "/tmp/tmp.Lplbe1aQQ2", 00:23:44.297 "method": "nvmf_subsystem_add_host", 00:23:44.297 "req_id": 1 00:23:44.297 } 00:23:44.297 Got JSON-RPC error response 00:23:44.297 response: 00:23:44.297 { 00:23:44.297 "code": -32603, 00:23:44.297 "message": "Internal error" 00:23:44.297 } 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1192016 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1192016 ']' 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1192016 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.297 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192016 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192016' 00:23:44.556 killing process with pid 1192016 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1192016 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1192016 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Lplbe1aQQ2 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:44.556 
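With the key file set back to 0600 at target/tls.sh@181, the setup_nvmf_tgt path that just failed at @177 is retried below and succeeds. Condensed from the rpc.py calls visible in this log (rpc.py stands for the full scripts/rpc.py path used above; everything else is as it appears here), the target-side TLS setup is roughly:

# Target-side setup as exercised by setup_nvmf_tgt (target/tls.sh@49-@58).
chmod 0600 /tmp/tmp.Lplbe1aQQ2   # 0666 is rejected above ("Incorrect permissions for PSK file"); the test uses 0600
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener, per "secure_channel": true in the saved config below
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2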
12:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1192458 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1192458 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1192458 ']' 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.556 12:13:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.816 [2024-07-15 12:13:34.580798] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:23:44.816 [2024-07-15 12:13:34.580848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.816 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.816 [2024-07-15 12:13:34.651281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.816 [2024-07-15 12:13:34.692276] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.816 [2024-07-15 12:13:34.692312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.816 [2024-07-15 12:13:34.692319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.816 [2024-07-15 12:13:34.692325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.816 [2024-07-15 12:13:34.692330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
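On the initiator side, run_bdevperf (target/tls.sh@22-@41 earlier, and the @187-@196 sequence below) drives the same three-step flow each time. A condensed sketch using only commands that appear in this log, with paths shortened:

# 1) start bdevperf with its own RPC socket (-r), waiting for configuration over RPC (-z)
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2) attach an NVMe/TCP controller over TLS, passing the same PSK file as the target
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2

# 3) run the verify workload against the TLSTESTn1 bdev created by the attach
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -5 (Input/output error) and -1 (Operation not permitted) JSON-RPC responses earlier in this log are step 2 failing under the negative-test conditions, while the ~5091 IOPS latency table above is step 3 completing on a successful run.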
00:23:44.816 [2024-07-15 12:13:34.692353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.385 12:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.385 12:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:45.385 12:13:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.385 12:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:45.385 12:13:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.644 12:13:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.644 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:23:45.644 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lplbe1aQQ2 00:23:45.644 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:45.644 [2024-07-15 12:13:35.567134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.644 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:45.903 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:46.162 [2024-07-15 12:13:35.916043] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.162 [2024-07-15 12:13:35.916239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.162 12:13:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:46.162 malloc0 00:23:46.162 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:46.421 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2 00:23:46.681 [2024-07-15 12:13:36.441598] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1192722 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1192722 /var/tmp/bdevperf.sock 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1192722 ']' 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.681 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.681 [2024-07-15 12:13:36.497982] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:23:46.681 [2024-07-15 12:13:36.498026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192722 ] 00:23:46.681 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.681 [2024-07-15 12:13:36.563672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.681 [2024-07-15 12:13:36.602845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.940 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.940 12:13:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.940 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Lplbe1aQQ2 00:23:46.940 [2024-07-15 12:13:36.854718] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.940 [2024-07-15 12:13:36.854806] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.940 TLSTESTn1 00:23:46.940 12:13:36 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:47.200 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:47.200 "subsystems": [ 00:23:47.200 { 00:23:47.200 "subsystem": "keyring", 00:23:47.200 "config": [] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "iobuf", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "iobuf_set_options", 00:23:47.200 "params": { 00:23:47.200 "small_pool_count": 8192, 00:23:47.200 "large_pool_count": 1024, 00:23:47.200 "small_bufsize": 8192, 00:23:47.200 "large_bufsize": 135168 00:23:47.200 } 00:23:47.200 } 00:23:47.200 ] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "sock", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "sock_set_default_impl", 00:23:47.200 "params": { 00:23:47.200 "impl_name": "posix" 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "sock_impl_set_options", 00:23:47.200 "params": { 00:23:47.200 "impl_name": "ssl", 00:23:47.200 "recv_buf_size": 4096, 00:23:47.200 "send_buf_size": 4096, 00:23:47.200 "enable_recv_pipe": true, 00:23:47.200 "enable_quickack": false, 00:23:47.200 "enable_placement_id": 0, 00:23:47.200 "enable_zerocopy_send_server": true, 00:23:47.200 "enable_zerocopy_send_client": false, 00:23:47.200 "zerocopy_threshold": 0, 00:23:47.200 "tls_version": 0, 00:23:47.200 "enable_ktls": false 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "sock_impl_set_options", 00:23:47.200 "params": { 00:23:47.200 "impl_name": "posix", 00:23:47.200 "recv_buf_size": 2097152, 00:23:47.200 
"send_buf_size": 2097152, 00:23:47.200 "enable_recv_pipe": true, 00:23:47.200 "enable_quickack": false, 00:23:47.200 "enable_placement_id": 0, 00:23:47.200 "enable_zerocopy_send_server": true, 00:23:47.200 "enable_zerocopy_send_client": false, 00:23:47.200 "zerocopy_threshold": 0, 00:23:47.200 "tls_version": 0, 00:23:47.200 "enable_ktls": false 00:23:47.200 } 00:23:47.200 } 00:23:47.200 ] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "vmd", 00:23:47.200 "config": [] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "accel", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "accel_set_options", 00:23:47.200 "params": { 00:23:47.200 "small_cache_size": 128, 00:23:47.200 "large_cache_size": 16, 00:23:47.200 "task_count": 2048, 00:23:47.200 "sequence_count": 2048, 00:23:47.200 "buf_count": 2048 00:23:47.200 } 00:23:47.200 } 00:23:47.200 ] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "bdev", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "bdev_set_options", 00:23:47.200 "params": { 00:23:47.200 "bdev_io_pool_size": 65535, 00:23:47.200 "bdev_io_cache_size": 256, 00:23:47.200 "bdev_auto_examine": true, 00:23:47.200 "iobuf_small_cache_size": 128, 00:23:47.200 "iobuf_large_cache_size": 16 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_raid_set_options", 00:23:47.200 "params": { 00:23:47.200 "process_window_size_kb": 1024 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_iscsi_set_options", 00:23:47.200 "params": { 00:23:47.200 "timeout_sec": 30 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_nvme_set_options", 00:23:47.200 "params": { 00:23:47.200 "action_on_timeout": "none", 00:23:47.200 "timeout_us": 0, 00:23:47.200 "timeout_admin_us": 0, 00:23:47.200 "keep_alive_timeout_ms": 10000, 00:23:47.200 "arbitration_burst": 0, 00:23:47.200 "low_priority_weight": 0, 00:23:47.200 "medium_priority_weight": 0, 00:23:47.200 "high_priority_weight": 0, 00:23:47.200 "nvme_adminq_poll_period_us": 10000, 00:23:47.200 "nvme_ioq_poll_period_us": 0, 00:23:47.200 "io_queue_requests": 0, 00:23:47.200 "delay_cmd_submit": true, 00:23:47.200 "transport_retry_count": 4, 00:23:47.200 "bdev_retry_count": 3, 00:23:47.200 "transport_ack_timeout": 0, 00:23:47.200 "ctrlr_loss_timeout_sec": 0, 00:23:47.200 "reconnect_delay_sec": 0, 00:23:47.200 "fast_io_fail_timeout_sec": 0, 00:23:47.200 "disable_auto_failback": false, 00:23:47.200 "generate_uuids": false, 00:23:47.200 "transport_tos": 0, 00:23:47.200 "nvme_error_stat": false, 00:23:47.200 "rdma_srq_size": 0, 00:23:47.200 "io_path_stat": false, 00:23:47.200 "allow_accel_sequence": false, 00:23:47.200 "rdma_max_cq_size": 0, 00:23:47.200 "rdma_cm_event_timeout_ms": 0, 00:23:47.200 "dhchap_digests": [ 00:23:47.200 "sha256", 00:23:47.200 "sha384", 00:23:47.200 "sha512" 00:23:47.200 ], 00:23:47.200 "dhchap_dhgroups": [ 00:23:47.200 "null", 00:23:47.200 "ffdhe2048", 00:23:47.200 "ffdhe3072", 00:23:47.200 "ffdhe4096", 00:23:47.200 "ffdhe6144", 00:23:47.200 "ffdhe8192" 00:23:47.200 ] 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_nvme_set_hotplug", 00:23:47.200 "params": { 00:23:47.200 "period_us": 100000, 00:23:47.200 "enable": false 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_malloc_create", 00:23:47.200 "params": { 00:23:47.200 "name": "malloc0", 00:23:47.200 "num_blocks": 8192, 00:23:47.200 "block_size": 4096, 00:23:47.200 "physical_block_size": 4096, 00:23:47.200 "uuid": 
"6cc0521f-4e56-41a0-b4b4-c7ce282e34fc", 00:23:47.200 "optimal_io_boundary": 0 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "bdev_wait_for_examine" 00:23:47.200 } 00:23:47.200 ] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "nbd", 00:23:47.200 "config": [] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "scheduler", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "framework_set_scheduler", 00:23:47.200 "params": { 00:23:47.200 "name": "static" 00:23:47.200 } 00:23:47.200 } 00:23:47.200 ] 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "subsystem": "nvmf", 00:23:47.200 "config": [ 00:23:47.200 { 00:23:47.200 "method": "nvmf_set_config", 00:23:47.200 "params": { 00:23:47.200 "discovery_filter": "match_any", 00:23:47.200 "admin_cmd_passthru": { 00:23:47.200 "identify_ctrlr": false 00:23:47.200 } 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "nvmf_set_max_subsystems", 00:23:47.200 "params": { 00:23:47.200 "max_subsystems": 1024 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "nvmf_set_crdt", 00:23:47.200 "params": { 00:23:47.200 "crdt1": 0, 00:23:47.200 "crdt2": 0, 00:23:47.200 "crdt3": 0 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "nvmf_create_transport", 00:23:47.200 "params": { 00:23:47.200 "trtype": "TCP", 00:23:47.200 "max_queue_depth": 128, 00:23:47.200 "max_io_qpairs_per_ctrlr": 127, 00:23:47.200 "in_capsule_data_size": 4096, 00:23:47.200 "max_io_size": 131072, 00:23:47.200 "io_unit_size": 131072, 00:23:47.200 "max_aq_depth": 128, 00:23:47.200 "num_shared_buffers": 511, 00:23:47.200 "buf_cache_size": 4294967295, 00:23:47.200 "dif_insert_or_strip": false, 00:23:47.200 "zcopy": false, 00:23:47.200 "c2h_success": false, 00:23:47.200 "sock_priority": 0, 00:23:47.200 "abort_timeout_sec": 1, 00:23:47.200 "ack_timeout": 0, 00:23:47.200 "data_wr_pool_size": 0 00:23:47.200 } 00:23:47.200 }, 00:23:47.200 { 00:23:47.200 "method": "nvmf_create_subsystem", 00:23:47.200 "params": { 00:23:47.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.200 "allow_any_host": false, 00:23:47.200 "serial_number": "SPDK00000000000001", 00:23:47.200 "model_number": "SPDK bdev Controller", 00:23:47.201 "max_namespaces": 10, 00:23:47.201 "min_cntlid": 1, 00:23:47.201 "max_cntlid": 65519, 00:23:47.201 "ana_reporting": false 00:23:47.201 } 00:23:47.201 }, 00:23:47.201 { 00:23:47.201 "method": "nvmf_subsystem_add_host", 00:23:47.201 "params": { 00:23:47.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.201 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.201 "psk": "/tmp/tmp.Lplbe1aQQ2" 00:23:47.201 } 00:23:47.201 }, 00:23:47.201 { 00:23:47.201 "method": "nvmf_subsystem_add_ns", 00:23:47.201 "params": { 00:23:47.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.201 "namespace": { 00:23:47.201 "nsid": 1, 00:23:47.201 "bdev_name": "malloc0", 00:23:47.201 "nguid": "6CC0521F4E5641A0B4B4C7CE282E34FC", 00:23:47.201 "uuid": "6cc0521f-4e56-41a0-b4b4-c7ce282e34fc", 00:23:47.201 "no_auto_visible": false 00:23:47.201 } 00:23:47.201 } 00:23:47.201 }, 00:23:47.201 { 00:23:47.201 "method": "nvmf_subsystem_add_listener", 00:23:47.201 "params": { 00:23:47.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.201 "listen_address": { 00:23:47.201 "trtype": "TCP", 00:23:47.201 "adrfam": "IPv4", 00:23:47.201 "traddr": "10.0.0.2", 00:23:47.201 "trsvcid": "4420" 00:23:47.201 }, 00:23:47.201 "secure_channel": true 00:23:47.201 } 00:23:47.201 } 00:23:47.201 ] 00:23:47.201 } 00:23:47.201 ] 00:23:47.201 }' 00:23:47.201 12:13:37 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:47.460 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:47.461 "subsystems": [ 00:23:47.461 { 00:23:47.461 "subsystem": "keyring", 00:23:47.461 "config": [] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "iobuf", 00:23:47.461 "config": [ 00:23:47.461 { 00:23:47.461 "method": "iobuf_set_options", 00:23:47.461 "params": { 00:23:47.461 "small_pool_count": 8192, 00:23:47.461 "large_pool_count": 1024, 00:23:47.461 "small_bufsize": 8192, 00:23:47.461 "large_bufsize": 135168 00:23:47.461 } 00:23:47.461 } 00:23:47.461 ] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "sock", 00:23:47.461 "config": [ 00:23:47.461 { 00:23:47.461 "method": "sock_set_default_impl", 00:23:47.461 "params": { 00:23:47.461 "impl_name": "posix" 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "sock_impl_set_options", 00:23:47.461 "params": { 00:23:47.461 "impl_name": "ssl", 00:23:47.461 "recv_buf_size": 4096, 00:23:47.461 "send_buf_size": 4096, 00:23:47.461 "enable_recv_pipe": true, 00:23:47.461 "enable_quickack": false, 00:23:47.461 "enable_placement_id": 0, 00:23:47.461 "enable_zerocopy_send_server": true, 00:23:47.461 "enable_zerocopy_send_client": false, 00:23:47.461 "zerocopy_threshold": 0, 00:23:47.461 "tls_version": 0, 00:23:47.461 "enable_ktls": false 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "sock_impl_set_options", 00:23:47.461 "params": { 00:23:47.461 "impl_name": "posix", 00:23:47.461 "recv_buf_size": 2097152, 00:23:47.461 "send_buf_size": 2097152, 00:23:47.461 "enable_recv_pipe": true, 00:23:47.461 "enable_quickack": false, 00:23:47.461 "enable_placement_id": 0, 00:23:47.461 "enable_zerocopy_send_server": true, 00:23:47.461 "enable_zerocopy_send_client": false, 00:23:47.461 "zerocopy_threshold": 0, 00:23:47.461 "tls_version": 0, 00:23:47.461 "enable_ktls": false 00:23:47.461 } 00:23:47.461 } 00:23:47.461 ] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "vmd", 00:23:47.461 "config": [] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "accel", 00:23:47.461 "config": [ 00:23:47.461 { 00:23:47.461 "method": "accel_set_options", 00:23:47.461 "params": { 00:23:47.461 "small_cache_size": 128, 00:23:47.461 "large_cache_size": 16, 00:23:47.461 "task_count": 2048, 00:23:47.461 "sequence_count": 2048, 00:23:47.461 "buf_count": 2048 00:23:47.461 } 00:23:47.461 } 00:23:47.461 ] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "bdev", 00:23:47.461 "config": [ 00:23:47.461 { 00:23:47.461 "method": "bdev_set_options", 00:23:47.461 "params": { 00:23:47.461 "bdev_io_pool_size": 65535, 00:23:47.461 "bdev_io_cache_size": 256, 00:23:47.461 "bdev_auto_examine": true, 00:23:47.461 "iobuf_small_cache_size": 128, 00:23:47.461 "iobuf_large_cache_size": 16 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_raid_set_options", 00:23:47.461 "params": { 00:23:47.461 "process_window_size_kb": 1024 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_iscsi_set_options", 00:23:47.461 "params": { 00:23:47.461 "timeout_sec": 30 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_nvme_set_options", 00:23:47.461 "params": { 00:23:47.461 "action_on_timeout": "none", 00:23:47.461 "timeout_us": 0, 00:23:47.461 "timeout_admin_us": 0, 00:23:47.461 "keep_alive_timeout_ms": 10000, 00:23:47.461 "arbitration_burst": 0, 
00:23:47.461 "low_priority_weight": 0, 00:23:47.461 "medium_priority_weight": 0, 00:23:47.461 "high_priority_weight": 0, 00:23:47.461 "nvme_adminq_poll_period_us": 10000, 00:23:47.461 "nvme_ioq_poll_period_us": 0, 00:23:47.461 "io_queue_requests": 512, 00:23:47.461 "delay_cmd_submit": true, 00:23:47.461 "transport_retry_count": 4, 00:23:47.461 "bdev_retry_count": 3, 00:23:47.461 "transport_ack_timeout": 0, 00:23:47.461 "ctrlr_loss_timeout_sec": 0, 00:23:47.461 "reconnect_delay_sec": 0, 00:23:47.461 "fast_io_fail_timeout_sec": 0, 00:23:47.461 "disable_auto_failback": false, 00:23:47.461 "generate_uuids": false, 00:23:47.461 "transport_tos": 0, 00:23:47.461 "nvme_error_stat": false, 00:23:47.461 "rdma_srq_size": 0, 00:23:47.461 "io_path_stat": false, 00:23:47.461 "allow_accel_sequence": false, 00:23:47.461 "rdma_max_cq_size": 0, 00:23:47.461 "rdma_cm_event_timeout_ms": 0, 00:23:47.461 "dhchap_digests": [ 00:23:47.461 "sha256", 00:23:47.461 "sha384", 00:23:47.461 "sha512" 00:23:47.461 ], 00:23:47.461 "dhchap_dhgroups": [ 00:23:47.461 "null", 00:23:47.461 "ffdhe2048", 00:23:47.461 "ffdhe3072", 00:23:47.461 "ffdhe4096", 00:23:47.461 "ffdhe6144", 00:23:47.461 "ffdhe8192" 00:23:47.461 ] 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_nvme_attach_controller", 00:23:47.461 "params": { 00:23:47.461 "name": "TLSTEST", 00:23:47.461 "trtype": "TCP", 00:23:47.461 "adrfam": "IPv4", 00:23:47.461 "traddr": "10.0.0.2", 00:23:47.461 "trsvcid": "4420", 00:23:47.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.461 "prchk_reftag": false, 00:23:47.461 "prchk_guard": false, 00:23:47.461 "ctrlr_loss_timeout_sec": 0, 00:23:47.461 "reconnect_delay_sec": 0, 00:23:47.461 "fast_io_fail_timeout_sec": 0, 00:23:47.461 "psk": "/tmp/tmp.Lplbe1aQQ2", 00:23:47.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.461 "hdgst": false, 00:23:47.461 "ddgst": false 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_nvme_set_hotplug", 00:23:47.461 "params": { 00:23:47.461 "period_us": 100000, 00:23:47.461 "enable": false 00:23:47.461 } 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "method": "bdev_wait_for_examine" 00:23:47.461 } 00:23:47.461 ] 00:23:47.461 }, 00:23:47.461 { 00:23:47.461 "subsystem": "nbd", 00:23:47.461 "config": [] 00:23:47.461 } 00:23:47.461 ] 00:23:47.461 }' 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1192722 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1192722 ']' 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1192722 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.461 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192722 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192722' 00:23:47.721 killing process with pid 1192722 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1192722 00:23:47.721 Received shutdown signal, test time was about 10.000000 seconds 00:23:47.721 00:23:47.721 Latency(us) 00:23:47.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:47.721 =================================================================================================================== 00:23:47.721 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:47.721 [2024-07-15 12:13:37.482539] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1192722 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1192458 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1192458 ']' 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1192458 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192458 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192458' 00:23:47.721 killing process with pid 1192458 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1192458 00:23:47.721 [2024-07-15 12:13:37.696122] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:47.721 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1192458 00:23:47.980 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:47.980 12:13:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.980 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:47.980 12:13:37 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:47.980 "subsystems": [ 00:23:47.980 { 00:23:47.980 "subsystem": "keyring", 00:23:47.980 "config": [] 00:23:47.980 }, 00:23:47.980 { 00:23:47.980 "subsystem": "iobuf", 00:23:47.980 "config": [ 00:23:47.980 { 00:23:47.980 "method": "iobuf_set_options", 00:23:47.980 "params": { 00:23:47.980 "small_pool_count": 8192, 00:23:47.980 "large_pool_count": 1024, 00:23:47.980 "small_bufsize": 8192, 00:23:47.980 "large_bufsize": 135168 00:23:47.980 } 00:23:47.980 } 00:23:47.980 ] 00:23:47.980 }, 00:23:47.980 { 00:23:47.980 "subsystem": "sock", 00:23:47.980 "config": [ 00:23:47.980 { 00:23:47.980 "method": "sock_set_default_impl", 00:23:47.980 "params": { 00:23:47.980 "impl_name": "posix" 00:23:47.980 } 00:23:47.980 }, 00:23:47.980 { 00:23:47.980 "method": "sock_impl_set_options", 00:23:47.980 "params": { 00:23:47.980 "impl_name": "ssl", 00:23:47.980 "recv_buf_size": 4096, 00:23:47.980 "send_buf_size": 4096, 00:23:47.980 "enable_recv_pipe": true, 00:23:47.980 "enable_quickack": false, 00:23:47.981 "enable_placement_id": 0, 00:23:47.981 "enable_zerocopy_send_server": true, 00:23:47.981 "enable_zerocopy_send_client": false, 00:23:47.981 "zerocopy_threshold": 0, 00:23:47.981 "tls_version": 0, 00:23:47.981 "enable_ktls": false 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "sock_impl_set_options", 00:23:47.981 "params": { 00:23:47.981 "impl_name": "posix", 00:23:47.981 
"recv_buf_size": 2097152, 00:23:47.981 "send_buf_size": 2097152, 00:23:47.981 "enable_recv_pipe": true, 00:23:47.981 "enable_quickack": false, 00:23:47.981 "enable_placement_id": 0, 00:23:47.981 "enable_zerocopy_send_server": true, 00:23:47.981 "enable_zerocopy_send_client": false, 00:23:47.981 "zerocopy_threshold": 0, 00:23:47.981 "tls_version": 0, 00:23:47.981 "enable_ktls": false 00:23:47.981 } 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "vmd", 00:23:47.981 "config": [] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "accel", 00:23:47.981 "config": [ 00:23:47.981 { 00:23:47.981 "method": "accel_set_options", 00:23:47.981 "params": { 00:23:47.981 "small_cache_size": 128, 00:23:47.981 "large_cache_size": 16, 00:23:47.981 "task_count": 2048, 00:23:47.981 "sequence_count": 2048, 00:23:47.981 "buf_count": 2048 00:23:47.981 } 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "bdev", 00:23:47.981 "config": [ 00:23:47.981 { 00:23:47.981 "method": "bdev_set_options", 00:23:47.981 "params": { 00:23:47.981 "bdev_io_pool_size": 65535, 00:23:47.981 "bdev_io_cache_size": 256, 00:23:47.981 "bdev_auto_examine": true, 00:23:47.981 "iobuf_small_cache_size": 128, 00:23:47.981 "iobuf_large_cache_size": 16 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_raid_set_options", 00:23:47.981 "params": { 00:23:47.981 "process_window_size_kb": 1024 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_iscsi_set_options", 00:23:47.981 "params": { 00:23:47.981 "timeout_sec": 30 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_nvme_set_options", 00:23:47.981 "params": { 00:23:47.981 "action_on_timeout": "none", 00:23:47.981 "timeout_us": 0, 00:23:47.981 "timeout_admin_us": 0, 00:23:47.981 "keep_alive_timeout_ms": 10000, 00:23:47.981 "arbitration_burst": 0, 00:23:47.981 "low_priority_weight": 0, 00:23:47.981 "medium_priority_weight": 0, 00:23:47.981 "high_priority_weight": 0, 00:23:47.981 "nvme_adminq_poll_period_us": 10000, 00:23:47.981 "nvme_ioq_poll_period_us": 0, 00:23:47.981 "io_queue_requests": 0, 00:23:47.981 "delay_cmd_submit": true, 00:23:47.981 "transport_retry_count": 4, 00:23:47.981 "bdev_retry_count": 3, 00:23:47.981 "transport_ack_timeout": 0, 00:23:47.981 "ctrlr_loss_timeout_sec": 0, 00:23:47.981 "reconnect_delay_sec": 0, 00:23:47.981 "fast_io_fail_timeout_sec": 0, 00:23:47.981 "disable_auto_failback": false, 00:23:47.981 "generate_uuids": false, 00:23:47.981 "transport_tos": 0, 00:23:47.981 "nvme_error_stat": false, 00:23:47.981 "rdma_srq_size": 0, 00:23:47.981 "io_path_stat": false, 00:23:47.981 "allow_accel_sequence": false, 00:23:47.981 "rdma_max_cq_size": 0, 00:23:47.981 "rdma_cm_event_timeout_ms": 0, 00:23:47.981 "dhchap_digests": [ 00:23:47.981 "sha256", 00:23:47.981 "sha384", 00:23:47.981 "sha512" 00:23:47.981 ], 00:23:47.981 "dhchap_dhgroups": [ 00:23:47.981 "null", 00:23:47.981 "ffdhe2048", 00:23:47.981 "ffdhe3072", 00:23:47.981 "ffdhe4096", 00:23:47.981 "ffdhe6144", 00:23:47.981 "ffdhe8192" 00:23:47.981 ] 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_nvme_set_hotplug", 00:23:47.981 "params": { 00:23:47.981 "period_us": 100000, 00:23:47.981 "enable": false 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_malloc_create", 00:23:47.981 "params": { 00:23:47.981 "name": "malloc0", 00:23:47.981 "num_blocks": 8192, 00:23:47.981 "block_size": 4096, 00:23:47.981 "physical_block_size": 4096, 
00:23:47.981 "uuid": "6cc0521f-4e56-41a0-b4b4-c7ce282e34fc", 00:23:47.981 "optimal_io_boundary": 0 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "bdev_wait_for_examine" 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "nbd", 00:23:47.981 "config": [] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "scheduler", 00:23:47.981 "config": [ 00:23:47.981 { 00:23:47.981 "method": "framework_set_scheduler", 00:23:47.981 "params": { 00:23:47.981 "name": "static" 00:23:47.981 } 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "subsystem": "nvmf", 00:23:47.981 "config": [ 00:23:47.981 { 00:23:47.981 "method": "nvmf_set_config", 00:23:47.981 "params": { 00:23:47.981 "discovery_filter": "match_any", 00:23:47.981 "admin_cmd_passthru": { 00:23:47.981 "identify_ctrlr": false 00:23:47.981 } 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_set_max_subsystems", 00:23:47.981 "params": { 00:23:47.981 "max_subsystems": 1024 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_set_crdt", 00:23:47.981 "params": { 00:23:47.981 "crdt1": 0, 00:23:47.981 "crdt2": 0, 00:23:47.981 "crdt3": 0 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_create_transport", 00:23:47.981 "params": { 00:23:47.981 "trtype": "TCP", 00:23:47.981 "max_queue_depth": 128, 00:23:47.981 "max_io_qpairs_per_ctrlr": 127, 00:23:47.981 "in_capsule_data_size": 4096, 00:23:47.981 "max_io_size": 131072, 00:23:47.981 "io_unit_size": 131072, 00:23:47.981 "max_aq_depth": 128, 00:23:47.981 "num_shared_buffers": 511, 00:23:47.981 "buf_cache_size": 4294967295, 00:23:47.981 "dif_insert_or_strip": false, 00:23:47.981 "zcopy": false, 00:23:47.981 "c2h_success": false, 00:23:47.981 "sock_priority": 0, 00:23:47.981 "abort_timeout_sec": 1, 00:23:47.981 "ack_timeout": 0, 00:23:47.981 "data_wr_pool_size": 0 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_create_subsystem", 00:23:47.981 "params": { 00:23:47.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.981 "allow_any_host": false, 00:23:47.981 "serial_number": "SPDK00000000000001", 00:23:47.981 "model_number": "SPDK bdev Controller", 00:23:47.981 "max_namespaces": 10, 00:23:47.981 "min_cntlid": 1, 00:23:47.981 "max_cntlid": 65519, 00:23:47.981 "ana_reporting": false 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_subsystem_add_host", 00:23:47.981 "params": { 00:23:47.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.981 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.981 "psk": "/tmp/tmp.Lplbe1aQQ2" 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_subsystem_add_ns", 00:23:47.981 "params": { 00:23:47.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.981 "namespace": { 00:23:47.981 "nsid": 1, 00:23:47.981 "bdev_name": "malloc0", 00:23:47.981 "nguid": "6CC0521F4E5641A0B4B4C7CE282E34FC", 00:23:47.981 "uuid": "6cc0521f-4e56-41a0-b4b4-c7ce282e34fc", 00:23:47.981 "no_auto_visible": false 00:23:47.981 } 00:23:47.981 } 00:23:47.981 }, 00:23:47.981 { 00:23:47.981 "method": "nvmf_subsystem_add_listener", 00:23:47.981 "params": { 00:23:47.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.981 "listen_address": { 00:23:47.981 "trtype": "TCP", 00:23:47.981 "adrfam": "IPv4", 00:23:47.981 "traddr": "10.0.0.2", 00:23:47.981 "trsvcid": "4420" 00:23:47.981 }, 00:23:47.981 "secure_channel": true 00:23:47.981 } 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 } 00:23:47.981 ] 00:23:47.981 }' 
00:23:47.981 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1192967 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1192967 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1192967 ']' 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.982 12:13:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.982 [2024-07-15 12:13:37.931887] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:23:47.982 [2024-07-15 12:13:37.931935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.982 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.241 [2024-07-15 12:13:37.993454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.241 [2024-07-15 12:13:38.033900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.241 [2024-07-15 12:13:38.033939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.241 [2024-07-15 12:13:38.033946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.241 [2024-07-15 12:13:38.033952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.241 [2024-07-15 12:13:38.033957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.241 [2024-07-15 12:13:38.034013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.241 [2024-07-15 12:13:38.232619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.500 [2024-07-15 12:13:38.248592] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:48.500 [2024-07-15 12:13:38.264648] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:48.500 [2024-07-15 12:13:38.275529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.759 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.759 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:48.759 12:13:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.759 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.759 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1193209 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1193209 /var/tmp/bdevperf.sock 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1193209 ']' 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
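With the target listening, the initiator side of this test is the bdevperf example application started in wait-for-RPC mode (-z) on its own socket; the verify workload is only kicked off afterwards through bdevperf.py. A condensed sketch of that two-step pattern, with paths written relative to the SPDK tree for brevity (an assumption; the log uses the full Jenkins workspace path) and $bperf_json standing in for the bdevperf configuration echoed just below:

    # 1. Start bdevperf idle (-z), pointing it at its own RPC socket and a bdev config.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperf_json")
    # 2. Once the socket is up, trigger the queued run over RPC.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests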
00:23:49.019 12:13:38 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:49.019 "subsystems": [ 00:23:49.019 { 00:23:49.020 "subsystem": "keyring", 00:23:49.020 "config": [] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "iobuf", 00:23:49.020 "config": [ 00:23:49.020 { 00:23:49.020 "method": "iobuf_set_options", 00:23:49.020 "params": { 00:23:49.020 "small_pool_count": 8192, 00:23:49.020 "large_pool_count": 1024, 00:23:49.020 "small_bufsize": 8192, 00:23:49.020 "large_bufsize": 135168 00:23:49.020 } 00:23:49.020 } 00:23:49.020 ] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "sock", 00:23:49.020 "config": [ 00:23:49.020 { 00:23:49.020 "method": "sock_set_default_impl", 00:23:49.020 "params": { 00:23:49.020 "impl_name": "posix" 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "sock_impl_set_options", 00:23:49.020 "params": { 00:23:49.020 "impl_name": "ssl", 00:23:49.020 "recv_buf_size": 4096, 00:23:49.020 "send_buf_size": 4096, 00:23:49.020 "enable_recv_pipe": true, 00:23:49.020 "enable_quickack": false, 00:23:49.020 "enable_placement_id": 0, 00:23:49.020 "enable_zerocopy_send_server": true, 00:23:49.020 "enable_zerocopy_send_client": false, 00:23:49.020 "zerocopy_threshold": 0, 00:23:49.020 "tls_version": 0, 00:23:49.020 "enable_ktls": false 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "sock_impl_set_options", 00:23:49.020 "params": { 00:23:49.020 "impl_name": "posix", 00:23:49.020 "recv_buf_size": 2097152, 00:23:49.020 "send_buf_size": 2097152, 00:23:49.020 "enable_recv_pipe": true, 00:23:49.020 "enable_quickack": false, 00:23:49.020 "enable_placement_id": 0, 00:23:49.020 "enable_zerocopy_send_server": true, 00:23:49.020 "enable_zerocopy_send_client": false, 00:23:49.020 "zerocopy_threshold": 0, 00:23:49.020 "tls_version": 0, 00:23:49.020 "enable_ktls": false 00:23:49.020 } 00:23:49.020 } 00:23:49.020 ] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "vmd", 00:23:49.020 "config": [] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "accel", 00:23:49.020 "config": [ 00:23:49.020 { 00:23:49.020 "method": "accel_set_options", 00:23:49.020 "params": { 00:23:49.020 "small_cache_size": 128, 00:23:49.020 "large_cache_size": 16, 00:23:49.020 "task_count": 2048, 00:23:49.020 "sequence_count": 2048, 00:23:49.020 "buf_count": 2048 00:23:49.020 } 00:23:49.020 } 00:23:49.020 ] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "bdev", 00:23:49.020 "config": [ 00:23:49.020 { 00:23:49.020 "method": "bdev_set_options", 00:23:49.020 "params": { 00:23:49.020 "bdev_io_pool_size": 65535, 00:23:49.020 "bdev_io_cache_size": 256, 00:23:49.020 "bdev_auto_examine": true, 00:23:49.020 "iobuf_small_cache_size": 128, 00:23:49.020 "iobuf_large_cache_size": 16 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_raid_set_options", 00:23:49.020 "params": { 00:23:49.020 "process_window_size_kb": 1024 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_iscsi_set_options", 00:23:49.020 "params": { 00:23:49.020 "timeout_sec": 30 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_nvme_set_options", 00:23:49.020 "params": { 00:23:49.020 "action_on_timeout": "none", 00:23:49.020 "timeout_us": 0, 00:23:49.020 "timeout_admin_us": 0, 00:23:49.020 "keep_alive_timeout_ms": 10000, 00:23:49.020 "arbitration_burst": 0, 00:23:49.020 "low_priority_weight": 0, 00:23:49.020 "medium_priority_weight": 0, 00:23:49.020 "high_priority_weight": 0, 00:23:49.020 
"nvme_adminq_poll_period_us": 10000, 00:23:49.020 "nvme_ioq_poll_period_us": 0, 00:23:49.020 "io_queue_requests": 512, 00:23:49.020 "delay_cmd_submit": true, 00:23:49.020 "transport_retry_count": 4, 00:23:49.020 "bdev_retry_count": 3, 00:23:49.020 "transport_ack_timeout": 0, 00:23:49.020 "ctrlr_loss_timeout_sec": 0, 00:23:49.020 "reconnect_delay_sec": 0, 00:23:49.020 "fast_io_fail_timeout_sec": 0, 00:23:49.020 "disable_auto_failback": false, 00:23:49.020 "generate_uuids": false, 00:23:49.020 "transport_tos": 0, 00:23:49.020 "nvme_error_stat": false, 00:23:49.020 "rdma_srq_size": 0, 00:23:49.020 "io_path_stat": false, 00:23:49.020 "allow_accel_sequence": false, 00:23:49.020 "rdma_max_cq_size": 0, 00:23:49.020 "rdma_cm_event_timeout_ms": 0, 00:23:49.020 "dhchap_digests": [ 00:23:49.020 "sha256", 00:23:49.020 "sha384", 00:23:49.020 "sha512" 00:23:49.020 ], 00:23:49.020 "dhchap_dhgroups": [ 00:23:49.020 "null", 00:23:49.020 "ffdhe2048", 00:23:49.020 "ffdhe3072", 00:23:49.020 "ffdhe4096", 00:23:49.020 "ffdhe6144", 00:23:49.020 "ffdhe8192" 00:23:49.020 ] 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_nvme_attach_controller", 00:23:49.020 "params": { 00:23:49.020 "name": "TLSTEST", 00:23:49.020 "trtype": "TCP", 00:23:49.020 "adrfam": "IPv4", 00:23:49.020 "traddr": "10.0.0.2", 00:23:49.020 "trsvcid": "4420", 00:23:49.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.020 "prchk_reftag": false, 00:23:49.020 "prchk_guard": false, 00:23:49.020 "ctrlr_loss_timeout_sec": 0, 00:23:49.020 "reconnect_delay_sec": 0, 00:23:49.020 "fast_io_fail_timeout_sec": 0, 00:23:49.020 "psk": "/tmp/tmp.Lplbe1aQQ2", 00:23:49.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.020 "hdgst": false, 00:23:49.020 "ddgst": false 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_nvme_set_hotplug", 00:23:49.020 "params": { 00:23:49.020 "period_us": 100000, 00:23:49.020 "enable": false 00:23:49.020 } 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "method": "bdev_wait_for_examine" 00:23:49.020 } 00:23:49.020 ] 00:23:49.020 }, 00:23:49.020 { 00:23:49.020 "subsystem": "nbd", 00:23:49.020 "config": [] 00:23:49.020 } 00:23:49.020 ] 00:23:49.020 }' 00:23:49.020 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.020 12:13:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.020 [2024-07-15 12:13:38.810366] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:23:49.020 [2024-07-15 12:13:38.810415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193209 ] 00:23:49.020 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.020 [2024-07-15 12:13:38.878564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.020 [2024-07-15 12:13:38.917546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.279 [2024-07-15 12:13:39.055369] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.279 [2024-07-15 12:13:39.055449] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:49.847 12:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.847 12:13:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:49.847 12:13:39 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:49.847 Running I/O for 10 seconds... 00:23:59.824 00:23:59.824 Latency(us) 00:23:59.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.824 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:59.824 Verification LBA range: start 0x0 length 0x2000 00:23:59.824 TLSTESTn1 : 10.03 4341.52 16.96 0.00 0.00 29419.42 6667.58 42854.85 00:23:59.824 =================================================================================================================== 00:23:59.824 Total : 4341.52 16.96 0.00 0.00 29419.42 6667.58 42854.85 00:23:59.824 0 00:23:59.824 12:13:49 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1193209 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1193209 ']' 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1193209 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.825 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1193209 00:24:00.085 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:00.085 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:00.085 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1193209' 00:24:00.085 killing process with pid 1193209 00:24:00.085 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1193209 00:24:00.085 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.085 00:24:00.085 Latency(us) 00:24:00.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.085 =================================================================================================================== 00:24:00.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.085 [2024-07-15 12:13:49.836905] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:00.085 12:13:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1193209 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1192967 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1192967 ']' 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1192967 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1192967 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1192967' 00:24:00.085 killing process with pid 1192967 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1192967 00:24:00.085 [2024-07-15 12:13:50.056674] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:00.085 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1192967 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1195057 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1195057 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1195057 ']' 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.345 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.345 [2024-07-15 12:13:50.292585] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:24:00.345 [2024-07-15 12:13:50.292628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.345 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.603 [2024-07-15 12:13:50.362549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.603 [2024-07-15 12:13:50.402932] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.603 [2024-07-15 12:13:50.402971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.603 [2024-07-15 12:13:50.402978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.603 [2024-07-15 12:13:50.402984] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.603 [2024-07-15 12:13:50.402989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.603 [2024-07-15 12:13:50.403007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Lplbe1aQQ2 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Lplbe1aQQ2 00:24:00.603 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:00.861 [2024-07-15 12:13:50.676838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.861 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:01.119 12:13:50 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:01.119 [2024-07-15 12:13:51.049789] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.119 [2024-07-15 12:13:51.049984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.119 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:01.377 malloc0 00:24:01.377 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Lplbe1aQQ2 00:24:01.635 [2024-07-15 12:13:51.595383] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1195306 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1195306 /var/tmp/bdevperf.sock 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1195306 ']' 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.635 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.894 [2024-07-15 12:13:51.655161] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:24:01.894 [2024-07-15 12:13:51.655206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195306 ] 00:24:01.894 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.894 [2024-07-15 12:13:51.721192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.894 [2024-07-15 12:13:51.761045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.894 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.894 12:13:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:01.894 12:13:51 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lplbe1aQQ2 00:24:02.153 12:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:02.411 [2024-07-15 12:13:52.185458] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.411 nvme0n1 00:24:02.411 12:13:52 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.411 Running I/O for 1 seconds... 
00:24:03.787 00:24:03.787 Latency(us) 00:24:03.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.787 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.787 Verification LBA range: start 0x0 length 0x2000 00:24:03.787 nvme0n1 : 1.01 5402.08 21.10 0.00 0.00 23516.58 4729.99 40803.28 00:24:03.787 =================================================================================================================== 00:24:03.787 Total : 5402.08 21.10 0.00 0.00 23516.58 4729.99 40803.28 00:24:03.787 0 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1195306 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1195306 ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1195306 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195306 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195306' 00:24:03.787 killing process with pid 1195306 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1195306 00:24:03.787 Received shutdown signal, test time was about 1.000000 seconds 00:24:03.787 00:24:03.787 Latency(us) 00:24:03.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.787 =================================================================================================================== 00:24:03.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1195306 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1195057 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1195057 ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1195057 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195057 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195057' 00:24:03.787 killing process with pid 1195057 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1195057 00:24:03.787 [2024-07-15 12:13:53.671753] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:03.787 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1195057 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.046 
12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1195659 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1195659 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1195659 ']' 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.046 12:13:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.046 [2024-07-15 12:13:53.908118] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:24:04.046 [2024-07-15 12:13:53.908168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.046 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.046 [2024-07-15 12:13:53.977772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.046 [2024-07-15 12:13:54.017167] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.046 [2024-07-15 12:13:54.017207] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.046 [2024-07-15 12:13:54.017214] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.046 [2024-07-15 12:13:54.017222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.046 [2024-07-15 12:13:54.017233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
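The startup banner above also spells out how trace data can be captured while this target instance (started with -i 0) is alive. A short sketch of the two options it mentions; the spdk_trace binary location and the destination path are assumptions for illustration:

    # Snapshot the nvmf tracepoint group of app instance 0, as the banner suggests.
    build/bin/spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file around for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0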
00:24:04.046 [2024-07-15 12:13:54.017253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 [2024-07-15 12:13:54.154332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.305 malloc0 00:24:04.305 [2024-07-15 12:13:54.182623] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.305 [2024-07-15 12:13:54.182818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1195795 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1195795 /var/tmp/bdevperf.sock 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1195795 ']' 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.305 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.305 [2024-07-15 12:13:54.254344] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:24:04.305 [2024-07-15 12:13:54.254384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195795 ] 00:24:04.305 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.564 [2024-07-15 12:13:54.320652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.564 [2024-07-15 12:13:54.362111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.564 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.564 12:13:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:04.564 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lplbe1aQQ2 00:24:04.823 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:04.823 [2024-07-15 12:13:54.771533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:05.080 nvme0n1 00:24:05.080 12:13:54 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.080 Running I/O for 1 seconds... 00:24:06.016 00:24:06.016 Latency(us) 00:24:06.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.016 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:06.016 Verification LBA range: start 0x0 length 0x2000 00:24:06.016 nvme0n1 : 1.01 5521.21 21.57 0.00 0.00 23010.45 5442.34 42170.99 00:24:06.016 =================================================================================================================== 00:24:06.016 Total : 5521.21 21.57 0.00 0.00 23010.45 5442.34 42170.99 00:24:06.016 0 00:24:06.016 12:13:55 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:06.016 12:13:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.016 12:13:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.275 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.275 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:06.276 "subsystems": [ 00:24:06.276 { 00:24:06.276 "subsystem": "keyring", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "keyring_file_add_key", 00:24:06.276 "params": { 00:24:06.276 "name": "key0", 00:24:06.276 "path": "/tmp/tmp.Lplbe1aQQ2" 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "iobuf", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "iobuf_set_options", 00:24:06.276 "params": { 00:24:06.276 "small_pool_count": 8192, 00:24:06.276 "large_pool_count": 1024, 00:24:06.276 "small_bufsize": 8192, 00:24:06.276 "large_bufsize": 135168 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "sock", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "sock_set_default_impl", 00:24:06.276 "params": { 00:24:06.276 "impl_name": "posix" 00:24:06.276 } 
00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "sock_impl_set_options", 00:24:06.276 "params": { 00:24:06.276 "impl_name": "ssl", 00:24:06.276 "recv_buf_size": 4096, 00:24:06.276 "send_buf_size": 4096, 00:24:06.276 "enable_recv_pipe": true, 00:24:06.276 "enable_quickack": false, 00:24:06.276 "enable_placement_id": 0, 00:24:06.276 "enable_zerocopy_send_server": true, 00:24:06.276 "enable_zerocopy_send_client": false, 00:24:06.276 "zerocopy_threshold": 0, 00:24:06.276 "tls_version": 0, 00:24:06.276 "enable_ktls": false 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "sock_impl_set_options", 00:24:06.276 "params": { 00:24:06.276 "impl_name": "posix", 00:24:06.276 "recv_buf_size": 2097152, 00:24:06.276 "send_buf_size": 2097152, 00:24:06.276 "enable_recv_pipe": true, 00:24:06.276 "enable_quickack": false, 00:24:06.276 "enable_placement_id": 0, 00:24:06.276 "enable_zerocopy_send_server": true, 00:24:06.276 "enable_zerocopy_send_client": false, 00:24:06.276 "zerocopy_threshold": 0, 00:24:06.276 "tls_version": 0, 00:24:06.276 "enable_ktls": false 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "vmd", 00:24:06.276 "config": [] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "accel", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "accel_set_options", 00:24:06.276 "params": { 00:24:06.276 "small_cache_size": 128, 00:24:06.276 "large_cache_size": 16, 00:24:06.276 "task_count": 2048, 00:24:06.276 "sequence_count": 2048, 00:24:06.276 "buf_count": 2048 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "bdev", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "bdev_set_options", 00:24:06.276 "params": { 00:24:06.276 "bdev_io_pool_size": 65535, 00:24:06.276 "bdev_io_cache_size": 256, 00:24:06.276 "bdev_auto_examine": true, 00:24:06.276 "iobuf_small_cache_size": 128, 00:24:06.276 "iobuf_large_cache_size": 16 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_raid_set_options", 00:24:06.276 "params": { 00:24:06.276 "process_window_size_kb": 1024 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_iscsi_set_options", 00:24:06.276 "params": { 00:24:06.276 "timeout_sec": 30 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_nvme_set_options", 00:24:06.276 "params": { 00:24:06.276 "action_on_timeout": "none", 00:24:06.276 "timeout_us": 0, 00:24:06.276 "timeout_admin_us": 0, 00:24:06.276 "keep_alive_timeout_ms": 10000, 00:24:06.276 "arbitration_burst": 0, 00:24:06.276 "low_priority_weight": 0, 00:24:06.276 "medium_priority_weight": 0, 00:24:06.276 "high_priority_weight": 0, 00:24:06.276 "nvme_adminq_poll_period_us": 10000, 00:24:06.276 "nvme_ioq_poll_period_us": 0, 00:24:06.276 "io_queue_requests": 0, 00:24:06.276 "delay_cmd_submit": true, 00:24:06.276 "transport_retry_count": 4, 00:24:06.276 "bdev_retry_count": 3, 00:24:06.276 "transport_ack_timeout": 0, 00:24:06.276 "ctrlr_loss_timeout_sec": 0, 00:24:06.276 "reconnect_delay_sec": 0, 00:24:06.276 "fast_io_fail_timeout_sec": 0, 00:24:06.276 "disable_auto_failback": false, 00:24:06.276 "generate_uuids": false, 00:24:06.276 "transport_tos": 0, 00:24:06.276 "nvme_error_stat": false, 00:24:06.276 "rdma_srq_size": 0, 00:24:06.276 "io_path_stat": false, 00:24:06.276 "allow_accel_sequence": false, 00:24:06.276 "rdma_max_cq_size": 0, 00:24:06.276 "rdma_cm_event_timeout_ms": 0, 00:24:06.276 "dhchap_digests": [ 00:24:06.276 "sha256", 
00:24:06.276 "sha384", 00:24:06.276 "sha512" 00:24:06.276 ], 00:24:06.276 "dhchap_dhgroups": [ 00:24:06.276 "null", 00:24:06.276 "ffdhe2048", 00:24:06.276 "ffdhe3072", 00:24:06.276 "ffdhe4096", 00:24:06.276 "ffdhe6144", 00:24:06.276 "ffdhe8192" 00:24:06.276 ] 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_nvme_set_hotplug", 00:24:06.276 "params": { 00:24:06.276 "period_us": 100000, 00:24:06.276 "enable": false 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_malloc_create", 00:24:06.276 "params": { 00:24:06.276 "name": "malloc0", 00:24:06.276 "num_blocks": 8192, 00:24:06.276 "block_size": 4096, 00:24:06.276 "physical_block_size": 4096, 00:24:06.276 "uuid": "ba7dbd6b-fd02-4d64-a222-60d38994e994", 00:24:06.276 "optimal_io_boundary": 0 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "bdev_wait_for_examine" 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "nbd", 00:24:06.276 "config": [] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "scheduler", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "framework_set_scheduler", 00:24:06.276 "params": { 00:24:06.276 "name": "static" 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "subsystem": "nvmf", 00:24:06.276 "config": [ 00:24:06.276 { 00:24:06.276 "method": "nvmf_set_config", 00:24:06.276 "params": { 00:24:06.276 "discovery_filter": "match_any", 00:24:06.276 "admin_cmd_passthru": { 00:24:06.276 "identify_ctrlr": false 00:24:06.276 } 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_set_max_subsystems", 00:24:06.276 "params": { 00:24:06.276 "max_subsystems": 1024 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_set_crdt", 00:24:06.276 "params": { 00:24:06.276 "crdt1": 0, 00:24:06.276 "crdt2": 0, 00:24:06.276 "crdt3": 0 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_create_transport", 00:24:06.276 "params": { 00:24:06.276 "trtype": "TCP", 00:24:06.276 "max_queue_depth": 128, 00:24:06.276 "max_io_qpairs_per_ctrlr": 127, 00:24:06.276 "in_capsule_data_size": 4096, 00:24:06.276 "max_io_size": 131072, 00:24:06.276 "io_unit_size": 131072, 00:24:06.276 "max_aq_depth": 128, 00:24:06.276 "num_shared_buffers": 511, 00:24:06.276 "buf_cache_size": 4294967295, 00:24:06.276 "dif_insert_or_strip": false, 00:24:06.276 "zcopy": false, 00:24:06.276 "c2h_success": false, 00:24:06.276 "sock_priority": 0, 00:24:06.276 "abort_timeout_sec": 1, 00:24:06.276 "ack_timeout": 0, 00:24:06.276 "data_wr_pool_size": 0 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_create_subsystem", 00:24:06.276 "params": { 00:24:06.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.276 "allow_any_host": false, 00:24:06.276 "serial_number": "00000000000000000000", 00:24:06.276 "model_number": "SPDK bdev Controller", 00:24:06.276 "max_namespaces": 32, 00:24:06.276 "min_cntlid": 1, 00:24:06.276 "max_cntlid": 65519, 00:24:06.276 "ana_reporting": false 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_subsystem_add_host", 00:24:06.276 "params": { 00:24:06.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.276 "host": "nqn.2016-06.io.spdk:host1", 00:24:06.276 "psk": "key0" 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_subsystem_add_ns", 00:24:06.276 "params": { 00:24:06.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.276 "namespace": { 00:24:06.276 "nsid": 1, 
00:24:06.276 "bdev_name": "malloc0", 00:24:06.276 "nguid": "BA7DBD6BFD024D64A22260D38994E994", 00:24:06.276 "uuid": "ba7dbd6b-fd02-4d64-a222-60d38994e994", 00:24:06.276 "no_auto_visible": false 00:24:06.276 } 00:24:06.276 } 00:24:06.276 }, 00:24:06.276 { 00:24:06.276 "method": "nvmf_subsystem_add_listener", 00:24:06.276 "params": { 00:24:06.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.276 "listen_address": { 00:24:06.276 "trtype": "TCP", 00:24:06.276 "adrfam": "IPv4", 00:24:06.276 "traddr": "10.0.0.2", 00:24:06.276 "trsvcid": "4420" 00:24:06.276 }, 00:24:06.276 "secure_channel": true 00:24:06.276 } 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 } 00:24:06.276 ] 00:24:06.276 }' 00:24:06.276 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:06.536 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:06.536 "subsystems": [ 00:24:06.536 { 00:24:06.536 "subsystem": "keyring", 00:24:06.536 "config": [ 00:24:06.536 { 00:24:06.536 "method": "keyring_file_add_key", 00:24:06.536 "params": { 00:24:06.536 "name": "key0", 00:24:06.536 "path": "/tmp/tmp.Lplbe1aQQ2" 00:24:06.536 } 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "iobuf", 00:24:06.536 "config": [ 00:24:06.536 { 00:24:06.536 "method": "iobuf_set_options", 00:24:06.536 "params": { 00:24:06.536 "small_pool_count": 8192, 00:24:06.536 "large_pool_count": 1024, 00:24:06.536 "small_bufsize": 8192, 00:24:06.536 "large_bufsize": 135168 00:24:06.536 } 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "sock", 00:24:06.536 "config": [ 00:24:06.536 { 00:24:06.536 "method": "sock_set_default_impl", 00:24:06.536 "params": { 00:24:06.536 "impl_name": "posix" 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "sock_impl_set_options", 00:24:06.536 "params": { 00:24:06.536 "impl_name": "ssl", 00:24:06.536 "recv_buf_size": 4096, 00:24:06.536 "send_buf_size": 4096, 00:24:06.536 "enable_recv_pipe": true, 00:24:06.536 "enable_quickack": false, 00:24:06.536 "enable_placement_id": 0, 00:24:06.536 "enable_zerocopy_send_server": true, 00:24:06.536 "enable_zerocopy_send_client": false, 00:24:06.536 "zerocopy_threshold": 0, 00:24:06.536 "tls_version": 0, 00:24:06.536 "enable_ktls": false 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "sock_impl_set_options", 00:24:06.536 "params": { 00:24:06.536 "impl_name": "posix", 00:24:06.536 "recv_buf_size": 2097152, 00:24:06.536 "send_buf_size": 2097152, 00:24:06.536 "enable_recv_pipe": true, 00:24:06.536 "enable_quickack": false, 00:24:06.536 "enable_placement_id": 0, 00:24:06.536 "enable_zerocopy_send_server": true, 00:24:06.536 "enable_zerocopy_send_client": false, 00:24:06.536 "zerocopy_threshold": 0, 00:24:06.536 "tls_version": 0, 00:24:06.536 "enable_ktls": false 00:24:06.536 } 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "vmd", 00:24:06.536 "config": [] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "accel", 00:24:06.536 "config": [ 00:24:06.536 { 00:24:06.536 "method": "accel_set_options", 00:24:06.536 "params": { 00:24:06.536 "small_cache_size": 128, 00:24:06.536 "large_cache_size": 16, 00:24:06.536 "task_count": 2048, 00:24:06.536 "sequence_count": 2048, 00:24:06.536 "buf_count": 2048 00:24:06.536 } 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "bdev", 00:24:06.536 "config": [ 
00:24:06.536 { 00:24:06.536 "method": "bdev_set_options", 00:24:06.536 "params": { 00:24:06.536 "bdev_io_pool_size": 65535, 00:24:06.536 "bdev_io_cache_size": 256, 00:24:06.536 "bdev_auto_examine": true, 00:24:06.536 "iobuf_small_cache_size": 128, 00:24:06.536 "iobuf_large_cache_size": 16 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_raid_set_options", 00:24:06.536 "params": { 00:24:06.536 "process_window_size_kb": 1024 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_iscsi_set_options", 00:24:06.536 "params": { 00:24:06.536 "timeout_sec": 30 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_nvme_set_options", 00:24:06.536 "params": { 00:24:06.536 "action_on_timeout": "none", 00:24:06.536 "timeout_us": 0, 00:24:06.536 "timeout_admin_us": 0, 00:24:06.536 "keep_alive_timeout_ms": 10000, 00:24:06.536 "arbitration_burst": 0, 00:24:06.536 "low_priority_weight": 0, 00:24:06.536 "medium_priority_weight": 0, 00:24:06.536 "high_priority_weight": 0, 00:24:06.536 "nvme_adminq_poll_period_us": 10000, 00:24:06.536 "nvme_ioq_poll_period_us": 0, 00:24:06.536 "io_queue_requests": 512, 00:24:06.536 "delay_cmd_submit": true, 00:24:06.536 "transport_retry_count": 4, 00:24:06.536 "bdev_retry_count": 3, 00:24:06.536 "transport_ack_timeout": 0, 00:24:06.536 "ctrlr_loss_timeout_sec": 0, 00:24:06.536 "reconnect_delay_sec": 0, 00:24:06.536 "fast_io_fail_timeout_sec": 0, 00:24:06.536 "disable_auto_failback": false, 00:24:06.536 "generate_uuids": false, 00:24:06.536 "transport_tos": 0, 00:24:06.536 "nvme_error_stat": false, 00:24:06.536 "rdma_srq_size": 0, 00:24:06.536 "io_path_stat": false, 00:24:06.536 "allow_accel_sequence": false, 00:24:06.536 "rdma_max_cq_size": 0, 00:24:06.536 "rdma_cm_event_timeout_ms": 0, 00:24:06.536 "dhchap_digests": [ 00:24:06.536 "sha256", 00:24:06.536 "sha384", 00:24:06.536 "sha512" 00:24:06.536 ], 00:24:06.536 "dhchap_dhgroups": [ 00:24:06.536 "null", 00:24:06.536 "ffdhe2048", 00:24:06.536 "ffdhe3072", 00:24:06.536 "ffdhe4096", 00:24:06.536 "ffdhe6144", 00:24:06.536 "ffdhe8192" 00:24:06.536 ] 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_nvme_attach_controller", 00:24:06.536 "params": { 00:24:06.536 "name": "nvme0", 00:24:06.536 "trtype": "TCP", 00:24:06.536 "adrfam": "IPv4", 00:24:06.536 "traddr": "10.0.0.2", 00:24:06.536 "trsvcid": "4420", 00:24:06.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.536 "prchk_reftag": false, 00:24:06.536 "prchk_guard": false, 00:24:06.536 "ctrlr_loss_timeout_sec": 0, 00:24:06.536 "reconnect_delay_sec": 0, 00:24:06.536 "fast_io_fail_timeout_sec": 0, 00:24:06.536 "psk": "key0", 00:24:06.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.536 "hdgst": false, 00:24:06.536 "ddgst": false 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_nvme_set_hotplug", 00:24:06.536 "params": { 00:24:06.536 "period_us": 100000, 00:24:06.536 "enable": false 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_enable_histogram", 00:24:06.536 "params": { 00:24:06.536 "name": "nvme0n1", 00:24:06.536 "enable": true 00:24:06.536 } 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "method": "bdev_wait_for_examine" 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }, 00:24:06.536 { 00:24:06.536 "subsystem": "nbd", 00:24:06.536 "config": [] 00:24:06.536 } 00:24:06.536 ] 00:24:06.536 }' 00:24:06.536 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1195795 00:24:06.536 12:13:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1195795 ']' 00:24:06.536 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1195795 00:24:06.536 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195795 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195795' 00:24:06.537 killing process with pid 1195795 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1195795 00:24:06.537 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.537 00:24:06.537 Latency(us) 00:24:06.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.537 =================================================================================================================== 00:24:06.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.537 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1195795 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1195659 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1195659 ']' 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1195659 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1195659 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1195659' 00:24:06.829 killing process with pid 1195659 00:24:06.829 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1195659 00:24:06.830 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1195659 00:24:06.830 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:06.830 12:13:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.830 12:13:56 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:06.830 "subsystems": [ 00:24:06.830 { 00:24:06.830 "subsystem": "keyring", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "keyring_file_add_key", 00:24:06.830 "params": { 00:24:06.830 "name": "key0", 00:24:06.830 "path": "/tmp/tmp.Lplbe1aQQ2" 00:24:06.830 } 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "iobuf", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "iobuf_set_options", 00:24:06.830 "params": { 00:24:06.830 "small_pool_count": 8192, 00:24:06.830 "large_pool_count": 1024, 00:24:06.830 "small_bufsize": 8192, 00:24:06.830 "large_bufsize": 135168 00:24:06.830 } 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "sock", 00:24:06.830 "config": [ 00:24:06.830 { 
00:24:06.830 "method": "sock_set_default_impl", 00:24:06.830 "params": { 00:24:06.830 "impl_name": "posix" 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "sock_impl_set_options", 00:24:06.830 "params": { 00:24:06.830 "impl_name": "ssl", 00:24:06.830 "recv_buf_size": 4096, 00:24:06.830 "send_buf_size": 4096, 00:24:06.830 "enable_recv_pipe": true, 00:24:06.830 "enable_quickack": false, 00:24:06.830 "enable_placement_id": 0, 00:24:06.830 "enable_zerocopy_send_server": true, 00:24:06.830 "enable_zerocopy_send_client": false, 00:24:06.830 "zerocopy_threshold": 0, 00:24:06.830 "tls_version": 0, 00:24:06.830 "enable_ktls": false 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "sock_impl_set_options", 00:24:06.830 "params": { 00:24:06.830 "impl_name": "posix", 00:24:06.830 "recv_buf_size": 2097152, 00:24:06.830 "send_buf_size": 2097152, 00:24:06.830 "enable_recv_pipe": true, 00:24:06.830 "enable_quickack": false, 00:24:06.830 "enable_placement_id": 0, 00:24:06.830 "enable_zerocopy_send_server": true, 00:24:06.830 "enable_zerocopy_send_client": false, 00:24:06.830 "zerocopy_threshold": 0, 00:24:06.830 "tls_version": 0, 00:24:06.830 "enable_ktls": false 00:24:06.830 } 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "vmd", 00:24:06.830 "config": [] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "accel", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "accel_set_options", 00:24:06.830 "params": { 00:24:06.830 "small_cache_size": 128, 00:24:06.830 "large_cache_size": 16, 00:24:06.830 "task_count": 2048, 00:24:06.830 "sequence_count": 2048, 00:24:06.830 "buf_count": 2048 00:24:06.830 } 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "bdev", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "bdev_set_options", 00:24:06.830 "params": { 00:24:06.830 "bdev_io_pool_size": 65535, 00:24:06.830 "bdev_io_cache_size": 256, 00:24:06.830 "bdev_auto_examine": true, 00:24:06.830 "iobuf_small_cache_size": 128, 00:24:06.830 "iobuf_large_cache_size": 16 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_raid_set_options", 00:24:06.830 "params": { 00:24:06.830 "process_window_size_kb": 1024 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_iscsi_set_options", 00:24:06.830 "params": { 00:24:06.830 "timeout_sec": 30 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_nvme_set_options", 00:24:06.830 "params": { 00:24:06.830 "action_on_timeout": "none", 00:24:06.830 "timeout_us": 0, 00:24:06.830 "timeout_admin_us": 0, 00:24:06.830 "keep_alive_timeout_ms": 10000, 00:24:06.830 "arbitration_burst": 0, 00:24:06.830 "low_priority_weight": 0, 00:24:06.830 "medium_priority_weight": 0, 00:24:06.830 "high_priority_weight": 0, 00:24:06.830 "nvme_adminq_poll_period_us": 10000, 00:24:06.830 "nvme_ioq_poll_period_us": 0, 00:24:06.830 "io_queue_requests": 0, 00:24:06.830 "delay_cmd_submit": true, 00:24:06.830 "transport_retry_count": 4, 00:24:06.830 "bdev_retry_count": 3, 00:24:06.830 "transport_ack_timeout": 0, 00:24:06.830 "ctrlr_loss_timeout_sec": 0, 00:24:06.830 "reconnect_delay_sec": 0, 00:24:06.830 "fast_io_fail_timeout_sec": 0, 00:24:06.830 "disable_auto_failback": false, 00:24:06.830 "generate_uuids": false, 00:24:06.830 "transport_tos": 0, 00:24:06.830 "nvme_error_stat": false, 00:24:06.830 "rdma_srq_size": 0, 00:24:06.830 "io_path_stat": false, 00:24:06.830 "allow_accel_sequence": false, 00:24:06.830 
"rdma_max_cq_size": 0, 00:24:06.830 "rdma_cm_event_timeout_ms": 0, 00:24:06.830 "dhchap_digests": [ 00:24:06.830 "sha256", 00:24:06.830 "sha384", 00:24:06.830 "sha512" 00:24:06.830 ], 00:24:06.830 "dhchap_dhgroups": [ 00:24:06.830 "null", 00:24:06.830 "ffdhe2048", 00:24:06.830 "ffdhe3072", 00:24:06.830 "ffdhe4096", 00:24:06.830 "ffdhe6144", 00:24:06.830 "ffdhe8192" 00:24:06.830 ] 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_nvme_set_hotplug", 00:24:06.830 "params": { 00:24:06.830 "period_us": 100000, 00:24:06.830 "enable": false 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_malloc_create", 00:24:06.830 "params": { 00:24:06.830 "name": "malloc0", 00:24:06.830 "num_blocks": 8192, 00:24:06.830 "block_size": 4096, 00:24:06.830 "physical_block_size": 4096, 00:24:06.830 "uuid": "ba7dbd6b-fd02-4d64-a222-60d38994e994", 00:24:06.830 "optimal_io_boundary": 0 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "bdev_wait_for_examine" 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "nbd", 00:24:06.830 "config": [] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "scheduler", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "framework_set_scheduler", 00:24:06.830 "params": { 00:24:06.830 "name": "static" 00:24:06.830 } 00:24:06.830 } 00:24:06.830 ] 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "subsystem": "nvmf", 00:24:06.830 "config": [ 00:24:06.830 { 00:24:06.830 "method": "nvmf_set_config", 00:24:06.830 "params": { 00:24:06.830 "discovery_filter": "match_any", 00:24:06.830 "admin_cmd_passthru": { 00:24:06.830 "identify_ctrlr": false 00:24:06.830 } 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_set_max_subsystems", 00:24:06.830 "params": { 00:24:06.830 "max_subsystems": 1024 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_set_crdt", 00:24:06.830 "params": { 00:24:06.830 "crdt1": 0, 00:24:06.830 "crdt2": 0, 00:24:06.830 "crdt3": 0 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_create_transport", 00:24:06.830 "params": { 00:24:06.830 "trtype": "TCP", 00:24:06.830 "max_queue_depth": 128, 00:24:06.830 "max_io_qpairs_per_ctrlr": 127, 00:24:06.830 "in_capsule_data_size": 4096, 00:24:06.830 "max_io_size": 131072, 00:24:06.830 "io_unit_size": 131072, 00:24:06.830 "max_aq_depth": 128, 00:24:06.830 "num_shared_buffers": 511, 00:24:06.830 "buf_cache_size": 4294967295, 00:24:06.830 "dif_insert_or_strip": false, 00:24:06.830 "zcopy": false, 00:24:06.830 "c2h_success": false, 00:24:06.830 "sock_priority": 0, 00:24:06.830 "abort_timeout_sec": 1, 00:24:06.830 "ack_timeout": 0, 00:24:06.830 "data_wr_pool_size": 0 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_create_subsystem", 00:24:06.830 "params": { 00:24:06.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.830 "allow_any_host": false, 00:24:06.830 "serial_number": "00000000000000000000", 00:24:06.830 "model_number": "SPDK bdev Controller", 00:24:06.830 "max_namespaces": 32, 00:24:06.830 "min_cntlid": 1, 00:24:06.830 "max_cntlid": 65519, 00:24:06.830 "ana_reporting": false 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_subsystem_add_host", 00:24:06.830 "params": { 00:24:06.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.830 "host": "nqn.2016-06.io.spdk:host1", 00:24:06.830 "psk": "key0" 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_subsystem_add_ns", 00:24:06.830 
"params": { 00:24:06.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.830 "namespace": { 00:24:06.830 "nsid": 1, 00:24:06.830 "bdev_name": "malloc0", 00:24:06.830 "nguid": "BA7DBD6BFD024D64A22260D38994E994", 00:24:06.830 "uuid": "ba7dbd6b-fd02-4d64-a222-60d38994e994", 00:24:06.830 "no_auto_visible": false 00:24:06.830 } 00:24:06.830 } 00:24:06.830 }, 00:24:06.830 { 00:24:06.830 "method": "nvmf_subsystem_add_listener", 00:24:06.830 "params": { 00:24:06.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.830 "listen_address": { 00:24:06.830 "trtype": "TCP", 00:24:06.830 "adrfam": "IPv4", 00:24:06.830 "traddr": "10.0.0.2", 00:24:06.830 "trsvcid": "4420" 00:24:06.830 }, 00:24:06.830 "secure_channel": true 00:24:06.830 } 00:24:06.831 } 00:24:06.831 ] 00:24:06.831 } 00:24:06.831 ] 00:24:06.831 }' 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1196188 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1196188 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1196188 ']' 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.831 12:13:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.089 [2024-07-15 12:13:56.834177] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:24:07.089 [2024-07-15 12:13:56.834231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.089 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.089 [2024-07-15 12:13:56.907191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.089 [2024-07-15 12:13:56.945741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.089 [2024-07-15 12:13:56.945781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.089 [2024-07-15 12:13:56.945789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.090 [2024-07-15 12:13:56.945796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.090 [2024-07-15 12:13:56.945801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:07.090 [2024-07-15 12:13:56.945855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.348 [2024-07-15 12:13:57.150999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.348 [2024-07-15 12:13:57.183033] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.348 [2024-07-15 12:13:57.190446] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1196302 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1196302 /var/tmp/bdevperf.sock 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1196302 ']' 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
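(Note for readers of this log: the bdevperf configuration echoed below mirrors the target-side setup from the initiator's point of view: it loads the same PSK into the keyring and attaches an NVMe-oF/TCP controller to the secure-channel listener. As a hedged sketch only, assuming a running bdevperf instance with its RPC socket at /var/tmp/bdevperf.sock and acknowledging that option names differ between SPDK releases, the same state could be created by hand with the commands below.)

  # hedged sketch: interactive equivalent of the bdevperf -c /dev/fd/63 config echoed below
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Lplbe1aQQ2
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # I/O is then driven through the perform_tests helper, as the test does further down:
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests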
00:24:07.916 12:13:57 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:07.916 "subsystems": [ 00:24:07.916 { 00:24:07.916 "subsystem": "keyring", 00:24:07.916 "config": [ 00:24:07.916 { 00:24:07.916 "method": "keyring_file_add_key", 00:24:07.916 "params": { 00:24:07.916 "name": "key0", 00:24:07.916 "path": "/tmp/tmp.Lplbe1aQQ2" 00:24:07.916 } 00:24:07.916 } 00:24:07.916 ] 00:24:07.916 }, 00:24:07.916 { 00:24:07.916 "subsystem": "iobuf", 00:24:07.916 "config": [ 00:24:07.917 { 00:24:07.917 "method": "iobuf_set_options", 00:24:07.917 "params": { 00:24:07.917 "small_pool_count": 8192, 00:24:07.917 "large_pool_count": 1024, 00:24:07.917 "small_bufsize": 8192, 00:24:07.917 "large_bufsize": 135168 00:24:07.917 } 00:24:07.917 } 00:24:07.917 ] 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "subsystem": "sock", 00:24:07.917 "config": [ 00:24:07.917 { 00:24:07.917 "method": "sock_set_default_impl", 00:24:07.917 "params": { 00:24:07.917 "impl_name": "posix" 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "sock_impl_set_options", 00:24:07.917 "params": { 00:24:07.917 "impl_name": "ssl", 00:24:07.917 "recv_buf_size": 4096, 00:24:07.917 "send_buf_size": 4096, 00:24:07.917 "enable_recv_pipe": true, 00:24:07.917 "enable_quickack": false, 00:24:07.917 "enable_placement_id": 0, 00:24:07.917 "enable_zerocopy_send_server": true, 00:24:07.917 "enable_zerocopy_send_client": false, 00:24:07.917 "zerocopy_threshold": 0, 00:24:07.917 "tls_version": 0, 00:24:07.917 "enable_ktls": false 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "sock_impl_set_options", 00:24:07.917 "params": { 00:24:07.917 "impl_name": "posix", 00:24:07.917 "recv_buf_size": 2097152, 00:24:07.917 "send_buf_size": 2097152, 00:24:07.917 "enable_recv_pipe": true, 00:24:07.917 "enable_quickack": false, 00:24:07.917 "enable_placement_id": 0, 00:24:07.917 "enable_zerocopy_send_server": true, 00:24:07.917 "enable_zerocopy_send_client": false, 00:24:07.917 "zerocopy_threshold": 0, 00:24:07.917 "tls_version": 0, 00:24:07.917 "enable_ktls": false 00:24:07.917 } 00:24:07.917 } 00:24:07.917 ] 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "subsystem": "vmd", 00:24:07.917 "config": [] 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "subsystem": "accel", 00:24:07.917 "config": [ 00:24:07.917 { 00:24:07.917 "method": "accel_set_options", 00:24:07.917 "params": { 00:24:07.917 "small_cache_size": 128, 00:24:07.917 "large_cache_size": 16, 00:24:07.917 "task_count": 2048, 00:24:07.917 "sequence_count": 2048, 00:24:07.917 "buf_count": 2048 00:24:07.917 } 00:24:07.917 } 00:24:07.917 ] 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "subsystem": "bdev", 00:24:07.917 "config": [ 00:24:07.917 { 00:24:07.917 "method": "bdev_set_options", 00:24:07.917 "params": { 00:24:07.917 "bdev_io_pool_size": 65535, 00:24:07.917 "bdev_io_cache_size": 256, 00:24:07.917 "bdev_auto_examine": true, 00:24:07.917 "iobuf_small_cache_size": 128, 00:24:07.917 "iobuf_large_cache_size": 16 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_raid_set_options", 00:24:07.917 "params": { 00:24:07.917 "process_window_size_kb": 1024 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_iscsi_set_options", 00:24:07.917 "params": { 00:24:07.917 "timeout_sec": 30 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_nvme_set_options", 00:24:07.917 "params": { 00:24:07.917 "action_on_timeout": "none", 00:24:07.917 "timeout_us": 0, 00:24:07.917 "timeout_admin_us": 0, 00:24:07.917 "keep_alive_timeout_ms": 
10000, 00:24:07.917 "arbitration_burst": 0, 00:24:07.917 "low_priority_weight": 0, 00:24:07.917 "medium_priority_weight": 0, 00:24:07.917 "high_priority_weight": 0, 00:24:07.917 "nvme_adminq_poll_period_us": 10000, 00:24:07.917 "nvme_ioq_poll_period_us": 0, 00:24:07.917 "io_queue_requests": 512, 00:24:07.917 "delay_cmd_submit": true, 00:24:07.917 "transport_retry_count": 4, 00:24:07.917 "bdev_retry_count": 3, 00:24:07.917 "transport_ack_timeout": 0, 00:24:07.917 "ctrlr_loss_timeout_sec": 0, 00:24:07.917 "reconnect_delay_sec": 0, 00:24:07.917 "fast_io_fail_timeout_sec": 0, 00:24:07.917 "disable_auto_failback": false, 00:24:07.917 "generate_uuids": false, 00:24:07.917 "transport_tos": 0, 00:24:07.917 "nvme_error_stat": false, 00:24:07.917 "rdma_srq_size": 0, 00:24:07.917 "io_path_stat": false, 00:24:07.917 "allow_accel_sequence": false, 00:24:07.917 "rdma_max_cq_size": 0, 00:24:07.917 "rdma_cm_event_timeout_ms": 0, 00:24:07.917 "dhchap_digests": [ 00:24:07.917 "sha256", 00:24:07.917 "sha384", 00:24:07.917 "sha512" 00:24:07.917 ], 00:24:07.917 "dhchap_dhgroups": [ 00:24:07.917 "null", 00:24:07.917 "ffdhe2048", 00:24:07.917 "ffdhe3072", 00:24:07.917 "ffdhe4096", 00:24:07.917 "ffdhe6144", 00:24:07.917 "ffdhe8192" 00:24:07.917 ] 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_nvme_attach_controller", 00:24:07.917 "params": { 00:24:07.917 "name": "nvme0", 00:24:07.917 "trtype": "TCP", 00:24:07.917 "adrfam": "IPv4", 00:24:07.917 "traddr": "10.0.0.2", 00:24:07.917 "trsvcid": "4420", 00:24:07.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.917 "prchk_reftag": false, 00:24:07.917 "prchk_guard": false, 00:24:07.917 "ctrlr_loss_timeout_sec": 0, 00:24:07.917 "reconnect_delay_sec": 0, 00:24:07.917 "fast_io_fail_timeout_sec": 0, 00:24:07.917 "psk": "key0", 00:24:07.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.917 "hdgst": false, 00:24:07.917 "ddgst": false 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_nvme_set_hotplug", 00:24:07.917 "params": { 00:24:07.917 "period_us": 100000, 00:24:07.917 "enable": false 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_enable_histogram", 00:24:07.917 "params": { 00:24:07.917 "name": "nvme0n1", 00:24:07.917 "enable": true 00:24:07.917 } 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "method": "bdev_wait_for_examine" 00:24:07.917 } 00:24:07.917 ] 00:24:07.917 }, 00:24:07.917 { 00:24:07.917 "subsystem": "nbd", 00:24:07.917 "config": [] 00:24:07.917 } 00:24:07.917 ] 00:24:07.917 }' 00:24:07.917 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.917 12:13:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.917 [2024-07-15 12:13:57.729272] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:24:07.917 [2024-07-15 12:13:57.729321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196302 ] 00:24:07.917 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.917 [2024-07-15 12:13:57.797624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.917 [2024-07-15 12:13:57.837181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.176 [2024-07-15 12:13:57.983081] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.745 12:13:58 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:09.004 Running I/O for 1 seconds... 00:24:09.941 00:24:09.941 Latency(us) 00:24:09.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.941 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:09.941 Verification LBA range: start 0x0 length 0x2000 00:24:09.941 nvme0n1 : 1.02 5554.09 21.70 0.00 0.00 22843.77 4957.94 25986.45 00:24:09.941 =================================================================================================================== 00:24:09.941 Total : 5554.09 21.70 0.00 0.00 22843.77 4957.94 25986.45 00:24:09.941 0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:09.941 nvmf_trace.0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1196302 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1196302 ']' 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1196302 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.941 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1196302 00:24:10.200 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:10.200 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:10.200 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1196302' 00:24:10.200 killing process with pid 1196302 00:24:10.200 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1196302 00:24:10.200 Received shutdown signal, test time was about 1.000000 seconds 00:24:10.200 00:24:10.200 Latency(us) 00:24:10.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.200 =================================================================================================================== 00:24:10.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.200 12:13:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1196302 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.200 rmmod nvme_tcp 00:24:10.200 rmmod nvme_fabrics 00:24:10.200 rmmod nvme_keyring 00:24:10.200 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1196188 ']' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1196188 ']' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1196188' 00:24:10.460 killing process with pid 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1196188 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.460 12:14:00 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.460 12:14:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.992 12:14:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:12.992 12:14:02 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.rCSzY95PFu /tmp/tmp.hA2BlCveIa /tmp/tmp.Lplbe1aQQ2 00:24:12.992 00:24:12.992 real 1m17.164s 00:24:12.992 user 1m55.679s 00:24:12.992 sys 0m28.625s 00:24:12.992 12:14:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.992 12:14:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.992 ************************************ 00:24:12.992 END TEST nvmf_tls 00:24:12.992 ************************************ 00:24:12.992 12:14:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:12.992 12:14:02 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:12.992 12:14:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.992 12:14:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.992 12:14:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.992 ************************************ 00:24:12.992 START TEST nvmf_fips 00:24:12.992 ************************************ 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:12.992 * Looking for test storage... 
00:24:12.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.992 12:14:02 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:12.992 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:24:12.993 Error setting digest 00:24:12.993 00A2288B5E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:12.993 00A2288B5E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.993 12:14:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.557 
12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:19.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:19.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.557 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:19.558 Found net devices under 0000:86:00.0: cvl_0_0 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:19.558 Found net devices under 0000:86:00.1: cvl_0_1 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:24:19.558 00:24:19.558 --- 10.0.0.2 ping statistics --- 00:24:19.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.558 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:19.558 00:24:19.558 --- 10.0.0.1 ping statistics --- 00:24:19.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.558 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1200310 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1200310 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1200310 ']' 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.558 12:14:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.558 [2024-07-15 12:14:08.702860] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:24:19.558 [2024-07-15 12:14:08.702907] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.558 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.558 [2024-07-15 12:14:08.775924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.558 [2024-07-15 12:14:08.817096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.558 [2024-07-15 12:14:08.817127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:19.558 [2024-07-15 12:14:08.817135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.558 [2024-07-15 12:14:08.817141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.558 [2024-07-15 12:14:08.817146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.558 [2024-07-15 12:14:08.817162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:19.558 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:19.818 [2024-07-15 12:14:09.691860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.818 [2024-07-15 12:14:09.707860] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.818 [2024-07-15 12:14:09.708043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.818 [2024-07-15 12:14:09.736074] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:19.818 malloc0 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1200467 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1200467 /var/tmp/bdevperf.sock 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1200467 ']' 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.818 12:14:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:20.105 [2024-07-15 12:14:09.827345] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:24:20.105 [2024-07-15 12:14:09.827394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200467 ] 00:24:20.105 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.105 [2024-07-15 12:14:09.895379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.105 [2024-07-15 12:14:09.937406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.672 12:14:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.672 12:14:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:20.672 12:14:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:20.930 [2024-07-15 12:14:10.783574] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.930 [2024-07-15 12:14:10.783651] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:20.930 TLSTESTn1 00:24:20.930 12:14:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.186 Running I/O for 10 seconds... 
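The fips.sh steps traced above reduce to three moves: write the NVMe/TCP interchange-format PSK to a 0600-mode key file, register that key with the running target for the test host, and attach a TLS-enabled controller from bdevperf using the same file. A condensed sketch of that flow, taken from the commands visible in this trace; the target-side rpc.py arguments are cut off in the log, so the nvmf_subsystem_add_host call below is an assumption inferred from the nvmf_tcp_subsystem_add_host PSK-path warning printed above:

# Interchange-format PSK shared by target and initiator (value as printed in the trace)
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt

# Target side (assumed invocation, exact arguments truncated in the trace):
# allow host1 to connect to cnode1 using the PSK file
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key.txt

# Initiator side, as traced at fips.sh@150: bdevperf attaches a TLS controller
# over its RPC socket using the same PSK file
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt

The 10-second verify workload that follows runs against the resulting TLSTESTn1 bdev.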
00:24:31.172 00:24:31.172 Latency(us) 00:24:31.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.172 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.172 Verification LBA range: start 0x0 length 0x2000 00:24:31.172 TLSTESTn1 : 10.03 3441.80 13.44 0.00 0.00 37129.79 5071.92 52884.70 00:24:31.172 =================================================================================================================== 00:24:31.172 Total : 3441.80 13.44 0.00 0.00 37129.79 5071.92 52884.70 00:24:31.172 0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:31.172 nvmf_trace.0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:31.172 12:14:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1200467 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1200467 ']' 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1200467 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1200467 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1200467' 00:24:31.173 killing process with pid 1200467 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1200467 00:24:31.173 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.173 00:24:31.173 Latency(us) 00:24:31.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.173 =================================================================================================================== 00:24:31.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.173 [2024-07-15 12:14:21.159127] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.173 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1200467 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.431 rmmod nvme_tcp 00:24:31.431 rmmod nvme_fabrics 00:24:31.431 rmmod nvme_keyring 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1200310 ']' 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1200310 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1200310 ']' 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1200310 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.431 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1200310 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1200310' 00:24:31.690 killing process with pid 1200310 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1200310 00:24:31.690 [2024-07-15 12:14:21.436471] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1200310 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.690 12:14:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:34.224 00:24:34.224 real 0m21.109s 00:24:34.224 user 0m22.107s 00:24:34.224 sys 0m9.867s 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:34.224 ************************************ 00:24:34.224 END TEST nvmf_fips 
00:24:34.224 ************************************ 00:24:34.224 12:14:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:34.224 12:14:23 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:34.224 12:14:23 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:34.224 12:14:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.224 12:14:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.224 12:14:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:34.224 ************************************ 00:24:34.224 START TEST nvmf_fuzz 00:24:34.224 ************************************ 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:34.224 * Looking for test storage... 00:24:34.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.224 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.225 12:14:23 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.225 12:14:23 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:39.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:39.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:39.557 Found net devices under 0000:86:00.0: cvl_0_0 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:39.557 Found net devices under 0000:86:00.1: cvl_0_1 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:39.557 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:39.558 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:39.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:24:39.821 00:24:39.821 --- 10.0.0.2 ping statistics --- 00:24:39.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.821 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:24:39.821 00:24:39.821 --- 10.0.0.1 ping statistics --- 00:24:39.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.821 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1205699 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1205699 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1205699 ']' 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
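Before each suite starts its target, nvmf_tcp_init rebuilds the same two-port test bed seen here and in the FIPS run above: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in this trace, with interface names and addresses exactly as printed in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                     # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator sanity check

# The target application then runs inside the namespace; for this fuzz suite:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1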
00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.821 12:14:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 Malloc0 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:40.761 12:14:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:12.838 Fuzzing completed. 
Shutting down the fuzz application 00:25:12.838 00:25:12.838 Dumping successful admin opcodes: 00:25:12.838 8, 9, 10, 24, 00:25:12.838 Dumping successful io opcodes: 00:25:12.838 0, 9, 00:25:12.838 NS: 0x200003aeff00 I/O qp, Total commands completed: 889362, total successful commands: 5175, random_seed: 3724645056 00:25:12.838 NS: 0x200003aeff00 admin qp, Total commands completed: 90792, total successful commands: 729, random_seed: 4225818816 00:25:12.838 12:15:00 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:12.838 Fuzzing completed. Shutting down the fuzz application 00:25:12.838 00:25:12.838 Dumping successful admin opcodes: 00:25:12.838 24, 00:25:12.838 Dumping successful io opcodes: 00:25:12.838 00:25:12.838 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 154422912 00:25:12.838 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 154503790 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.838 rmmod nvme_tcp 00:25:12.838 rmmod nvme_fabrics 00:25:12.838 rmmod nvme_keyring 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1205699 ']' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1205699 ']' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:12.838 
12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1205699' 00:25:12.838 killing process with pid 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1205699 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.838 12:15:02 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.739 12:15:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.739 12:15:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:14.739 00:25:14.739 real 0m40.922s 00:25:14.739 user 0m53.050s 00:25:14.739 sys 0m17.195s 00:25:14.739 12:15:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.739 12:15:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:14.739 ************************************ 00:25:14.739 END TEST nvmf_fuzz 00:25:14.739 ************************************ 00:25:14.739 12:15:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:14.739 12:15:04 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:14.739 12:15:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:14.739 12:15:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.739 12:15:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.997 ************************************ 00:25:14.997 START TEST nvmf_multiconnection 00:25:14.997 ************************************ 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:14.997 * Looking for test storage... 
00:25:14.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.997 12:15:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.998 12:15:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:21.564 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.565 12:15:10 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:21.565 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:21.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:21.565 Found net devices under 0000:86:00.0: cvl_0_0 00:25:21.565 12:15:10 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:21.565 Found net devices under 0000:86:00.1: cvl_0_1 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
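The discovery pass above resolves each detected E810 port (vendor 0x8086, device 0x159b at 0000:86:00.0/.1, bound to the ice driver) to the kernel interface that sits beneath it in sysfs. A minimal standalone sketch of that last lookup step, using the two PCI addresses from this run; the loop structure is illustrative only, and the real gather_supported_nvmf_pci_devs additionally filters by vendor/device ID and interface state:

#!/usr/bin/env bash
# Condensed, hypothetical form of the sysfs lookup the harness performs above:
# list the net interfaces under each NVMe-oF-capable NIC function.
for pci in 0000:86:00.0 0000:86:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
    done
done
# On this machine the lookup yields cvl_0_0 and cvl_0_1, which populate net_devs.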
00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:25:21.565 00:25:21.565 --- 10.0.0.2 ping statistics --- 00:25:21.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.565 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:25:21.565 00:25:21.565 --- 10.0.0.1 ping statistics --- 00:25:21.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.565 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1214979 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1214979 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1214979 ']' 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
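Everything nvmf_tcp_init does above reduces to a short sequence: move one E810 port into a fresh network namespace to act as the target, keep the second port in the root namespace as the initiator, put both on 10.0.0.0/24, open TCP port 4420, verify reachability in both directions, and then launch nvmf_tgt inside the namespace. A condensed sketch using the names from this run (cvl_0_0/cvl_0_1, cvl_0_0_ns_spdk); the consolidation and backgrounding are editorial, the commands themselves are the ones echoed in the trace:

# Target interface lives in its own netns; initiator stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
# Start the NVMe-oF target application inside the namespace (cores 0-3, all trace groups):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &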
00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.565 12:15:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.565 [2024-07-15 12:15:10.667020] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:25:21.565 [2024-07-15 12:15:10.667064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.565 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.565 [2024-07-15 12:15:10.740408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.565 [2024-07-15 12:15:10.783826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.565 [2024-07-15 12:15:10.783865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.565 [2024-07-15 12:15:10.783873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.565 [2024-07-15 12:15:10.783879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.566 [2024-07-15 12:15:10.783885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.566 [2024-07-15 12:15:10.783949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.566 [2024-07-15 12:15:10.784058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.566 [2024-07-15 12:15:10.784164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.566 [2024-07-15 12:15:10.784165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.566 [2024-07-15 12:15:11.527368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.566 
12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.566 Malloc1 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.566 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 [2024-07-15 12:15:11.579409] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 Malloc2 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.825 12:15:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.825 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 Malloc3 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 Malloc4 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 Malloc5 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 Malloc6 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 Malloc7 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.826 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.086 Malloc8 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:22.086 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 Malloc9 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 Malloc10 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 Malloc11 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
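The provisioning pass above is one transport-creation call followed by the same four RPCs repeated for cnode1 through cnode11. A condensed sketch follows; it assumes rpc_cmd resolves to SPDK's scripts/rpc.py talking to the target over the default UNIX socket (/var/tmp/spdk.sock) the harness waited for earlier, and the $rpc variable is introduced here for brevity:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# One TCP transport for the whole target; options copied from NVMF_TRANSPORT_OPTS above.
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                                   # 64 MB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"        # allow any host, serial SPDK$i
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"            # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done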
00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.087 12:15:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:23.463 12:15:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:23.463 12:15:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:23.463 12:15:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.463 12:15:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:23.463 12:15:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.375 12:15:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:26.749 12:15:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:26.749 12:15:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:26.749 12:15:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.749 12:15:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:26.749 12:15:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.651 
12:15:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.651 12:15:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:30.054 12:15:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:30.054 12:15:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:30.054 12:15:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.054 12:15:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:30.054 12:15:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.953 12:15:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:32.887 12:15:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:32.887 12:15:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:32.887 12:15:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.887 12:15:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:32.887 12:15:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.415 12:15:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:36.350 12:15:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:36.350 12:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:36.350 12:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.350 12:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:36.350 12:15:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.251 12:15:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:39.625 12:15:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:39.625 12:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:39.625 12:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.625 12:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:39.625 12:15:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.528 12:15:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:42.902 12:15:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:42.902 12:15:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:42.902 12:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.902 12:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:42.902 12:15:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.798 12:15:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:46.172 12:15:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:46.172 12:15:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:46.172 12:15:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.172 12:15:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:46.172 12:15:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.072 12:15:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:49.444 12:15:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:49.444 12:15:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:49.444 12:15:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.444 12:15:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
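On the host side, the test loops over the eleven subsystems with the connect-and-verify pattern running here and continuing below: nvme-cli connects to the listener at 10.0.0.2:4420, then waitforserial polls lsblk until a namespace whose serial matches SPDK$i appears. A condensed sketch; the host_uuid variable is introduced for readability (the value is the hostnqn/hostid generated for this run), and the harness's 15-attempt retry counter is simplified to an until loop:

host_uuid=80aaeb9f-0274-ea11-906e-0017a4403562
for i in $(seq 1 11); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$host_uuid" --hostid="$host_uuid"
    # waitforserial: the malloc-backed namespace shows up with serial number SPDK$i
    until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done
done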
00:25:49.444 12:15:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.344 12:15:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:53.272 12:15:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:53.272 12:15:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:53.272 12:15:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.272 12:15:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:53.272 12:15:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.170 12:15:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:56.545 12:15:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:56.545 12:15:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.545 12:15:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.545 12:15:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.545 12:15:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:58.446 12:15:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:58.446 [global] 00:25:58.446 thread=1 00:25:58.446 invalidate=1 00:25:58.446 rw=read 00:25:58.446 time_based=1 00:25:58.446 runtime=10 00:25:58.446 ioengine=libaio 00:25:58.446 direct=1 00:25:58.447 bs=262144 00:25:58.447 iodepth=64 00:25:58.447 norandommap=1 00:25:58.447 numjobs=1 00:25:58.447 00:25:58.447 [job0] 00:25:58.447 filename=/dev/nvme0n1 00:25:58.447 [job1] 00:25:58.447 filename=/dev/nvme10n1 00:25:58.447 [job2] 00:25:58.447 filename=/dev/nvme1n1 00:25:58.447 [job3] 00:25:58.447 filename=/dev/nvme2n1 00:25:58.447 [job4] 00:25:58.447 filename=/dev/nvme3n1 00:25:58.447 [job5] 00:25:58.447 filename=/dev/nvme4n1 00:25:58.447 [job6] 00:25:58.447 filename=/dev/nvme5n1 00:25:58.447 [job7] 00:25:58.447 filename=/dev/nvme6n1 00:25:58.447 [job8] 00:25:58.447 filename=/dev/nvme7n1 00:25:58.447 [job9] 00:25:58.447 filename=/dev/nvme8n1 00:25:58.447 [job10] 00:25:58.447 filename=/dev/nvme9n1 00:25:58.704 Could not set queue depth (nvme0n1) 00:25:58.704 Could not set queue depth (nvme10n1) 00:25:58.704 Could not set queue depth (nvme1n1) 00:25:58.704 Could not set queue depth (nvme2n1) 00:25:58.704 Could not set queue depth (nvme3n1) 00:25:58.704 Could not set queue depth (nvme4n1) 00:25:58.704 Could not set queue depth (nvme5n1) 00:25:58.704 Could not set queue depth (nvme6n1) 00:25:58.704 Could not set queue depth (nvme7n1) 00:25:58.704 Could not set queue depth (nvme8n1) 00:25:58.704 Could not set queue depth (nvme9n1) 00:25:58.961 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.962 fio-3.35 00:25:58.962 Starting 11 threads 00:26:11.161 00:26:11.161 job0: 
(groupid=0, jobs=1): err= 0: pid=1221521: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=909, BW=227MiB/s (238MB/s)(2284MiB/10048msec) 00:26:11.161 slat (usec): min=10, max=82568, avg=884.31, stdev=3026.43 00:26:11.161 clat (usec): min=696, max=202388, avg=69391.14, stdev=36801.83 00:26:11.161 lat (usec): min=722, max=245879, avg=70275.45, stdev=37233.63 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 35], 00:26:11.161 | 30.00th=[ 47], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 78], 00:26:11.161 | 70.00th=[ 90], 80.00th=[ 101], 90.00th=[ 123], 95.00th=[ 138], 00:26:11.161 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 180], 00:26:11.161 | 99.99th=[ 203] 00:26:11.161 bw ( KiB/s): min=119808, max=339968, per=10.29%, avg=232295.65, stdev=71174.86, samples=20 00:26:11.161 iops : min= 468, max= 1328, avg=907.35, stdev=278.01, samples=20 00:26:11.161 lat (usec) : 750=0.02%, 1000=0.05% 00:26:11.161 lat (msec) : 2=0.10%, 4=0.55%, 10=2.05%, 20=4.01%, 50=26.19% 00:26:11.161 lat (msec) : 100=47.16%, 250=19.88% 00:26:11.161 cpu : usr=0.33%, sys=3.39%, ctx=1956, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=9137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job1: (groupid=0, jobs=1): err= 0: pid=1221539: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=827, BW=207MiB/s (217MB/s)(2086MiB/10089msec) 00:26:11.161 slat (usec): min=8, max=118082, avg=922.72, stdev=3695.63 00:26:11.161 clat (usec): min=1317, max=253137, avg=76340.93, stdev=43956.08 00:26:11.161 lat (usec): min=1357, max=288277, avg=77263.65, stdev=44454.75 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 32], 00:26:11.161 | 30.00th=[ 49], 40.00th=[ 64], 50.00th=[ 73], 60.00th=[ 85], 00:26:11.161 | 70.00th=[ 105], 80.00th=[ 116], 90.00th=[ 132], 95.00th=[ 146], 00:26:11.161 | 99.00th=[ 188], 99.50th=[ 209], 99.90th=[ 213], 99.95th=[ 218], 00:26:11.161 | 99.99th=[ 253] 00:26:11.161 bw ( KiB/s): min=114176, max=475648, per=9.39%, avg=211976.00, stdev=101102.74, samples=20 00:26:11.161 iops : min= 446, max= 1858, avg=828.00, stdev=394.94, samples=20 00:26:11.161 lat (msec) : 2=0.02%, 4=0.10%, 10=4.61%, 20=5.64%, 50=19.98% 00:26:11.161 lat (msec) : 100=37.42%, 250=32.19%, 500=0.04% 00:26:11.161 cpu : usr=0.28%, sys=3.05%, ctx=1704, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=8345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job2: (groupid=0, jobs=1): err= 0: pid=1221557: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=750, BW=188MiB/s (197MB/s)(1886MiB/10050msec) 00:26:11.161 slat (usec): min=7, max=106086, avg=979.79, stdev=3566.98 00:26:11.161 clat (usec): min=1848, max=207355, avg=84181.03, stdev=38741.44 00:26:11.161 lat (usec): min=1865, max=274811, avg=85160.82, stdev=39271.77 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 5], 5.00th=[ 
21], 10.00th=[ 30], 20.00th=[ 53], 00:26:11.161 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 94], 00:26:11.161 | 70.00th=[ 107], 80.00th=[ 116], 90.00th=[ 134], 95.00th=[ 155], 00:26:11.161 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 197], 00:26:11.161 | 99.99th=[ 207] 00:26:11.161 bw ( KiB/s): min=96256, max=283703, per=8.48%, avg=191524.85, stdev=52125.43, samples=20 00:26:11.161 iops : min= 376, max= 1108, avg=748.10, stdev=203.61, samples=20 00:26:11.161 lat (msec) : 2=0.08%, 4=0.86%, 10=0.54%, 20=3.09%, 50=13.55% 00:26:11.161 lat (msec) : 100=46.85%, 250=35.03% 00:26:11.161 cpu : usr=0.24%, sys=2.73%, ctx=1777, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=7544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job3: (groupid=0, jobs=1): err= 0: pid=1221566: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=918, BW=230MiB/s (241MB/s)(2308MiB/10054msec) 00:26:11.161 slat (usec): min=10, max=56483, avg=934.93, stdev=3069.91 00:26:11.161 clat (usec): min=756, max=178400, avg=68663.36, stdev=35705.09 00:26:11.161 lat (usec): min=808, max=179092, avg=69598.29, stdev=36225.89 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 36], 00:26:11.161 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 73], 00:26:11.161 | 70.00th=[ 87], 80.00th=[ 103], 90.00th=[ 121], 95.00th=[ 136], 00:26:11.161 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 169], 00:26:11.161 | 99.99th=[ 180] 00:26:11.161 bw ( KiB/s): min=118272, max=503296, per=10.39%, avg=234731.20, stdev=110543.92, samples=20 00:26:11.161 iops : min= 462, max= 1966, avg=916.90, stdev=431.82, samples=20 00:26:11.161 lat (usec) : 1000=0.01% 00:26:11.161 lat (msec) : 2=0.09%, 4=0.38%, 10=2.45%, 20=2.73%, 50=26.21% 00:26:11.161 lat (msec) : 100=46.82%, 250=21.31% 00:26:11.161 cpu : usr=0.36%, sys=3.47%, ctx=1904, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=9233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job4: (groupid=0, jobs=1): err= 0: pid=1221572: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=655, BW=164MiB/s (172MB/s)(1648MiB/10052msec) 00:26:11.161 slat (usec): min=11, max=95878, avg=1327.70, stdev=4099.01 00:26:11.161 clat (msec): min=5, max=251, avg=96.21, stdev=27.32 00:26:11.161 lat (msec): min=5, max=255, avg=97.54, stdev=27.77 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 35], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 75], 00:26:11.161 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 93], 60.00th=[ 97], 00:26:11.161 | 70.00th=[ 104], 80.00th=[ 113], 90.00th=[ 134], 95.00th=[ 153], 00:26:11.161 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 207], 99.95th=[ 218], 00:26:11.161 | 99.99th=[ 251] 00:26:11.161 bw ( KiB/s): min=101888, max=238592, per=7.40%, avg=167065.60, stdev=32378.36, samples=20 00:26:11.161 iops : min= 398, max= 932, avg=652.60, stdev=126.48, samples=20 
00:26:11.161 lat (msec) : 10=0.03%, 20=0.26%, 50=1.20%, 100=63.78%, 250=34.70% 00:26:11.161 lat (msec) : 500=0.03% 00:26:11.161 cpu : usr=0.35%, sys=2.65%, ctx=1439, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=6590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job5: (groupid=0, jobs=1): err= 0: pid=1221587: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=793, BW=198MiB/s (208MB/s)(2001MiB/10089msec) 00:26:11.161 slat (usec): min=7, max=147097, avg=661.30, stdev=3856.42 00:26:11.161 clat (usec): min=875, max=243927, avg=79943.17, stdev=46616.72 00:26:11.161 lat (usec): min=916, max=266458, avg=80604.47, stdev=47144.32 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 32], 00:26:11.161 | 30.00th=[ 47], 40.00th=[ 65], 50.00th=[ 81], 60.00th=[ 99], 00:26:11.161 | 70.00th=[ 112], 80.00th=[ 123], 90.00th=[ 140], 95.00th=[ 153], 00:26:11.161 | 99.00th=[ 192], 99.50th=[ 201], 99.90th=[ 209], 99.95th=[ 211], 00:26:11.161 | 99.99th=[ 245] 00:26:11.161 bw ( KiB/s): min=108544, max=348672, per=9.00%, avg=203238.40, stdev=52961.56, samples=20 00:26:11.161 iops : min= 424, max= 1362, avg=793.90, stdev=206.88, samples=20 00:26:11.161 lat (usec) : 1000=0.01% 00:26:11.161 lat (msec) : 2=0.30%, 4=0.99%, 10=3.47%, 20=8.10%, 50=19.31% 00:26:11.161 lat (msec) : 100=29.27%, 250=38.55% 00:26:11.161 cpu : usr=0.36%, sys=2.61%, ctx=2160, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=8002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job6: (groupid=0, jobs=1): err= 0: pid=1221596: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=646, BW=162MiB/s (169MB/s)(1631MiB/10093msec) 00:26:11.161 slat (usec): min=11, max=96770, avg=1142.85, stdev=4297.00 00:26:11.161 clat (usec): min=1741, max=221928, avg=97754.80, stdev=40708.82 00:26:11.161 lat (usec): min=1778, max=245503, avg=98897.65, stdev=41442.31 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 33], 20.00th=[ 67], 00:26:11.161 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 111], 00:26:11.161 | 70.00th=[ 118], 80.00th=[ 130], 90.00th=[ 148], 95.00th=[ 161], 00:26:11.161 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 213], 00:26:11.161 | 99.99th=[ 222] 00:26:11.161 bw ( KiB/s): min=98816, max=258048, per=7.33%, avg=165427.20, stdev=47775.22, samples=20 00:26:11.161 iops : min= 386, max= 1008, avg=646.20, stdev=186.62, samples=20 00:26:11.161 lat (msec) : 2=0.06%, 4=0.20%, 10=1.38%, 20=4.35%, 50=7.97% 00:26:11.161 lat (msec) : 100=31.49%, 250=54.54% 00:26:11.161 cpu : usr=0.29%, sys=2.49%, ctx=1583, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 
issued rwts: total=6525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job7: (groupid=0, jobs=1): err= 0: pid=1221604: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=975, BW=244MiB/s (256MB/s)(2450MiB/10048msec) 00:26:11.161 slat (usec): min=9, max=62526, avg=767.30, stdev=2654.87 00:26:11.161 clat (usec): min=1025, max=192320, avg=64786.10, stdev=32974.20 00:26:11.161 lat (usec): min=1066, max=192350, avg=65553.40, stdev=33166.93 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 29], 20.00th=[ 39], 00:26:11.161 | 30.00th=[ 44], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 70], 00:26:11.161 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 129], 00:26:11.161 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 190], 00:26:11.161 | 99.99th=[ 192] 00:26:11.161 bw ( KiB/s): min=151552, max=435712, per=11.03%, avg=249178.95, stdev=79735.90, samples=20 00:26:11.161 iops : min= 592, max= 1702, avg=973.30, stdev=311.47, samples=20 00:26:11.161 lat (msec) : 2=0.09%, 4=0.39%, 10=1.91%, 20=2.84%, 50=33.60% 00:26:11.161 lat (msec) : 100=48.69%, 250=12.48% 00:26:11.161 cpu : usr=0.50%, sys=3.36%, ctx=2065, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=9798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job8: (groupid=0, jobs=1): err= 0: pid=1221628: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=874, BW=219MiB/s (229MB/s)(2206MiB/10088msec) 00:26:11.161 slat (usec): min=10, max=86405, avg=862.63, stdev=3178.57 00:26:11.161 clat (usec): min=765, max=221269, avg=72253.55, stdev=40673.23 00:26:11.161 lat (usec): min=799, max=232564, avg=73116.18, stdev=41169.00 00:26:11.161 clat percentiles (usec): 00:26:11.161 | 1.00th=[ 1926], 5.00th=[ 7898], 10.00th=[ 16450], 20.00th=[ 38011], 00:26:11.161 | 30.00th=[ 46924], 40.00th=[ 56886], 50.00th=[ 68682], 60.00th=[ 84411], 00:26:11.161 | 70.00th=[ 93848], 80.00th=[105382], 90.00th=[129500], 95.00th=[141558], 00:26:11.161 | 99.00th=[170918], 99.50th=[183501], 99.90th=[206570], 99.95th=[210764], 00:26:11.161 | 99.99th=[221250] 00:26:11.161 bw ( KiB/s): min=123904, max=366080, per=9.93%, avg=224251.50, stdev=69768.47, samples=20 00:26:11.161 iops : min= 484, max= 1430, avg=875.95, stdev=272.54, samples=20 00:26:11.161 lat (usec) : 1000=0.12% 00:26:11.161 lat (msec) : 2=0.94%, 4=0.93%, 10=4.68%, 20=5.06%, 50=21.29% 00:26:11.161 lat (msec) : 100=41.83%, 250=25.15% 00:26:11.161 cpu : usr=0.44%, sys=3.24%, ctx=2057, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=8822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job9: (groupid=0, jobs=1): err= 0: pid=1221639: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=757, BW=189MiB/s (199MB/s)(1895MiB/10010msec) 00:26:11.161 slat (usec): min=8, max=119924, avg=1099.53, stdev=3664.86 00:26:11.161 clat (usec): min=1060, max=213369, avg=83308.22, 
stdev=41202.10 00:26:11.161 lat (usec): min=1094, max=271797, avg=84407.75, stdev=41742.39 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 35], 00:26:11.161 | 30.00th=[ 60], 40.00th=[ 73], 50.00th=[ 87], 60.00th=[ 96], 00:26:11.161 | 70.00th=[ 105], 80.00th=[ 120], 90.00th=[ 138], 95.00th=[ 150], 00:26:11.161 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 209], 00:26:11.161 | 99.99th=[ 213] 00:26:11.161 bw ( KiB/s): min=109056, max=481280, per=8.52%, avg=192437.65, stdev=83976.14, samples=20 00:26:11.161 iops : min= 426, max= 1880, avg=751.70, stdev=328.03, samples=20 00:26:11.161 lat (msec) : 2=0.05%, 4=0.40%, 10=0.62%, 20=2.16%, 50=20.99% 00:26:11.161 lat (msec) : 100=40.73%, 250=35.05% 00:26:11.161 cpu : usr=0.32%, sys=2.89%, ctx=1663, majf=0, minf=3348 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=7581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 job10: (groupid=0, jobs=1): err= 0: pid=1221647: Mon Jul 15 12:15:59 2024 00:26:11.161 read: IOPS=738, BW=185MiB/s (194MB/s)(1864MiB/10090msec) 00:26:11.161 slat (usec): min=8, max=45392, avg=659.61, stdev=3078.81 00:26:11.161 clat (usec): min=729, max=245197, avg=85891.86, stdev=42286.33 00:26:11.161 lat (usec): min=765, max=245220, avg=86551.47, stdev=42654.58 00:26:11.161 clat percentiles (msec): 00:26:11.161 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 47], 00:26:11.161 | 30.00th=[ 64], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 100], 00:26:11.161 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 138], 95.00th=[ 155], 00:26:11.161 | 99.00th=[ 180], 99.50th=[ 194], 99.90th=[ 245], 99.95th=[ 245], 00:26:11.161 | 99.99th=[ 245] 00:26:11.161 bw ( KiB/s): min=120320, max=293888, per=8.38%, avg=189209.60, stdev=49539.38, samples=20 00:26:11.161 iops : min= 470, max= 1148, avg=739.10, stdev=193.51, samples=20 00:26:11.161 lat (usec) : 750=0.01%, 1000=0.15% 00:26:11.161 lat (msec) : 2=0.34%, 4=0.63%, 10=2.90%, 20=3.66%, 50=14.10% 00:26:11.161 lat (msec) : 100=38.84%, 250=39.37% 00:26:11.161 cpu : usr=0.36%, sys=2.44%, ctx=2162, majf=0, minf=4097 00:26:11.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:11.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.161 issued rwts: total=7454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.161 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.161 00:26:11.161 Run status group 0 (all jobs): 00:26:11.161 READ: bw=2205MiB/s (2312MB/s), 162MiB/s-244MiB/s (169MB/s-256MB/s), io=21.7GiB (23.3GB), run=10010-10093msec 00:26:11.161 00:26:11.161 Disk stats (read/write): 00:26:11.161 nvme0n1: ios=18043/0, merge=0/0, ticks=1237086/0, in_queue=1237086, util=97.24% 00:26:11.161 nvme10n1: ios=16468/0, merge=0/0, ticks=1233268/0, in_queue=1233268, util=97.44% 00:26:11.161 nvme1n1: ios=14855/0, merge=0/0, ticks=1238801/0, in_queue=1238801, util=97.73% 00:26:11.161 nvme2n1: ios=18231/0, merge=0/0, ticks=1234162/0, in_queue=1234162, util=97.88% 00:26:11.161 nvme3n1: ios=12973/0, merge=0/0, ticks=1233963/0, in_queue=1233963, util=97.92% 00:26:11.161 nvme4n1: ios=15835/0, 
merge=0/0, ticks=1243601/0, in_queue=1243601, util=98.26% 00:26:11.161 nvme5n1: ios=12807/0, merge=0/0, ticks=1234651/0, in_queue=1234651, util=98.40% 00:26:11.161 nvme6n1: ios=19358/0, merge=0/0, ticks=1239321/0, in_queue=1239321, util=98.55% 00:26:11.161 nvme7n1: ios=17461/0, merge=0/0, ticks=1235744/0, in_queue=1235744, util=98.91% 00:26:11.161 nvme8n1: ios=14760/0, merge=0/0, ticks=1236143/0, in_queue=1236143, util=99.10% 00:26:11.161 nvme9n1: ios=14704/0, merge=0/0, ticks=1241560/0, in_queue=1241560, util=99.20% 00:26:11.161 12:15:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:11.161 [global] 00:26:11.161 thread=1 00:26:11.161 invalidate=1 00:26:11.161 rw=randwrite 00:26:11.161 time_based=1 00:26:11.161 runtime=10 00:26:11.161 ioengine=libaio 00:26:11.161 direct=1 00:26:11.161 bs=262144 00:26:11.161 iodepth=64 00:26:11.161 norandommap=1 00:26:11.161 numjobs=1 00:26:11.161 00:26:11.161 [job0] 00:26:11.161 filename=/dev/nvme0n1 00:26:11.161 [job1] 00:26:11.161 filename=/dev/nvme10n1 00:26:11.161 [job2] 00:26:11.161 filename=/dev/nvme1n1 00:26:11.161 [job3] 00:26:11.161 filename=/dev/nvme2n1 00:26:11.161 [job4] 00:26:11.161 filename=/dev/nvme3n1 00:26:11.161 [job5] 00:26:11.161 filename=/dev/nvme4n1 00:26:11.161 [job6] 00:26:11.161 filename=/dev/nvme5n1 00:26:11.161 [job7] 00:26:11.161 filename=/dev/nvme6n1 00:26:11.161 [job8] 00:26:11.161 filename=/dev/nvme7n1 00:26:11.161 [job9] 00:26:11.161 filename=/dev/nvme8n1 00:26:11.161 [job10] 00:26:11.161 filename=/dev/nvme9n1 00:26:11.161 Could not set queue depth (nvme0n1) 00:26:11.161 Could not set queue depth (nvme10n1) 00:26:11.161 Could not set queue depth (nvme1n1) 00:26:11.161 Could not set queue depth (nvme2n1) 00:26:11.161 Could not set queue depth (nvme3n1) 00:26:11.161 Could not set queue depth (nvme4n1) 00:26:11.161 Could not set queue depth (nvme5n1) 00:26:11.161 Could not set queue depth (nvme6n1) 00:26:11.161 Could not set queue depth (nvme7n1) 00:26:11.161 Could not set queue depth (nvme8n1) 00:26:11.161 Could not set queue depth (nvme9n1) 00:26:11.161 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.161 fio-3.35 00:26:11.161 Starting 11 threads 00:26:21.138 00:26:21.138 job0: (groupid=0, jobs=1): err= 0: pid=1223219: Mon Jul 15 12:16:10 2024 00:26:21.138 write: IOPS=639, BW=160MiB/s (168MB/s)(1611MiB/10077msec); 0 zone resets 00:26:21.138 slat (usec): min=22, max=122712, avg=1238.07, stdev=3320.22 00:26:21.138 clat (usec): min=916, max=339548, avg=98827.04, stdev=51765.03 00:26:21.138 lat (usec): min=974, max=339613, avg=100065.11, stdev=52343.59 00:26:21.138 clat percentiles (msec): 00:26:21.138 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 37], 20.00th=[ 70], 00:26:21.138 | 30.00th=[ 77], 40.00th=[ 90], 50.00th=[ 100], 60.00th=[ 105], 00:26:21.138 | 70.00th=[ 107], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 176], 00:26:21.138 | 99.00th=[ 313], 99.50th=[ 317], 99.90th=[ 338], 99.95th=[ 338], 00:26:21.138 | 99.99th=[ 338] 00:26:21.138 bw ( KiB/s): min=67584, max=226816, per=10.43%, avg=163315.75, stdev=37884.14, samples=20 00:26:21.138 iops : min= 264, max= 886, avg=637.95, stdev=147.99, samples=20 00:26:21.138 lat (usec) : 1000=0.02% 00:26:21.138 lat (msec) : 2=0.28%, 4=1.24%, 10=3.07%, 20=1.92%, 50=5.57% 00:26:21.138 lat (msec) : 100=40.88%, 250=44.47%, 500=2.55% 00:26:21.138 cpu : usr=1.59%, sys=2.09%, ctx=2946, majf=0, minf=1 00:26:21.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:21.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.138 issued rwts: total=0,6443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.138 job1: (groupid=0, jobs=1): err= 0: pid=1223221: Mon Jul 15 12:16:10 2024 00:26:21.138 write: IOPS=602, BW=151MiB/s (158MB/s)(1517MiB/10067msec); 0 zone resets 00:26:21.138 slat (usec): min=20, max=42359, avg=1360.66, stdev=3416.05 00:26:21.138 clat (usec): min=1156, max=265004, avg=104802.37, stdev=63739.44 00:26:21.138 lat (usec): min=1219, max=265071, avg=106163.03, stdev=64673.95 00:26:21.138 clat percentiles (msec): 00:26:21.138 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 30], 20.00th=[ 44], 00:26:21.138 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 94], 60.00th=[ 109], 00:26:21.138 | 70.00th=[ 136], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 224], 00:26:21.138 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:26:21.138 | 99.99th=[ 266] 00:26:21.138 bw ( KiB/s): min=67584, max=343888, per=9.81%, avg=153668.00, stdev=77630.61, samples=20 00:26:21.138 iops : min= 264, max= 1343, avg=600.25, stdev=303.20, samples=20 00:26:21.138 lat (msec) : 2=0.13%, 4=0.86%, 10=2.93%, 20=2.93%, 50=16.66% 00:26:21.138 lat (msec) : 100=29.08%, 250=46.65%, 500=0.76% 00:26:21.138 cpu : usr=1.58%, sys=2.00%, ctx=2910, majf=0, minf=1 00:26:21.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:21.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.138 issued rwts: total=0,6067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.138 job2: (groupid=0, jobs=1): err= 0: pid=1223222: Mon Jul 15 12:16:10 2024 00:26:21.138 write: IOPS=372, BW=93.1MiB/s (97.6MB/s)(947MiB/10165msec); 0 zone resets 00:26:21.138 slat (usec): min=25, max=31392, avg=2394.69, stdev=4615.74 00:26:21.138 clat (msec): min=2, max=340, 
avg=169.36, stdev=38.51 00:26:21.138 lat (msec): min=2, max=340, avg=171.75, stdev=38.76 00:26:21.138 clat percentiles (msec): 00:26:21.138 | 1.00th=[ 44], 5.00th=[ 114], 10.00th=[ 126], 20.00th=[ 138], 00:26:21.138 | 30.00th=[ 153], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 178], 00:26:21.138 | 70.00th=[ 188], 80.00th=[ 201], 90.00th=[ 220], 95.00th=[ 228], 00:26:21.138 | 99.00th=[ 243], 99.50th=[ 284], 99.90th=[ 330], 99.95th=[ 342], 00:26:21.138 | 99.99th=[ 342] 00:26:21.138 bw ( KiB/s): min=69632, max=134898, per=6.08%, avg=95269.70, stdev=16628.87, samples=20 00:26:21.138 iops : min= 272, max= 526, avg=372.10, stdev=64.84, samples=20 00:26:21.138 lat (msec) : 4=0.08%, 10=0.24%, 20=0.24%, 50=0.63%, 100=1.11% 00:26:21.138 lat (msec) : 250=96.80%, 500=0.90% 00:26:21.138 cpu : usr=0.97%, sys=1.12%, ctx=1254, majf=0, minf=1 00:26:21.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:21.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.138 issued rwts: total=0,3786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.138 job3: (groupid=0, jobs=1): err= 0: pid=1223234: Mon Jul 15 12:16:10 2024 00:26:21.138 write: IOPS=502, BW=126MiB/s (132MB/s)(1277MiB/10162msec); 0 zone resets 00:26:21.138 slat (usec): min=23, max=66817, avg=1701.63, stdev=4343.04 00:26:21.138 clat (usec): min=1089, max=348202, avg=125529.88, stdev=78750.82 00:26:21.138 lat (usec): min=1127, max=348250, avg=127231.51, stdev=79923.95 00:26:21.138 clat percentiles (msec): 00:26:21.138 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 41], 00:26:21.138 | 30.00th=[ 51], 40.00th=[ 101], 50.00th=[ 133], 60.00th=[ 161], 00:26:21.138 | 70.00th=[ 184], 80.00th=[ 209], 90.00th=[ 228], 95.00th=[ 245], 00:26:21.138 | 99.00th=[ 262], 99.50th=[ 279], 99.90th=[ 338], 99.95th=[ 338], 00:26:21.138 | 99.99th=[ 351] 00:26:21.138 bw ( KiB/s): min=65536, max=414208, per=8.24%, avg=129102.35, stdev=85603.00, samples=20 00:26:21.138 iops : min= 256, max= 1618, avg=504.30, stdev=334.38, samples=20 00:26:21.138 lat (msec) : 2=0.22%, 4=0.51%, 10=2.68%, 20=5.54%, 50=20.91% 00:26:21.138 lat (msec) : 100=10.53%, 250=55.89%, 500=3.72% 00:26:21.138 cpu : usr=1.45%, sys=1.44%, ctx=2522, majf=0, minf=1 00:26:21.138 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:21.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.138 issued rwts: total=0,5108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.138 job4: (groupid=0, jobs=1): err= 0: pid=1223235: Mon Jul 15 12:16:10 2024 00:26:21.138 write: IOPS=750, BW=188MiB/s (197MB/s)(1890MiB/10078msec); 0 zone resets 00:26:21.138 slat (usec): min=23, max=93370, avg=1130.61, stdev=2713.16 00:26:21.138 clat (usec): min=1178, max=203390, avg=84126.78, stdev=37983.05 00:26:21.138 lat (usec): min=1235, max=203443, avg=85257.39, stdev=38456.01 00:26:21.138 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 41], 00:26:21.139 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 97], 60.00th=[ 103], 00:26:21.139 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 128], 95.00th=[ 140], 00:26:21.139 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 197], 99.95th=[ 201], 00:26:21.139 | 
99.99th=[ 203] 00:26:21.139 bw ( KiB/s): min=132096, max=401408, per=12.26%, avg=191930.20, stdev=73337.93, samples=20 00:26:21.139 iops : min= 516, max= 1568, avg=749.70, stdev=286.48, samples=20 00:26:21.139 lat (msec) : 2=0.07%, 4=0.44%, 10=1.92%, 20=3.74%, 50=18.66% 00:26:21.139 lat (msec) : 100=31.81%, 250=43.37% 00:26:21.139 cpu : usr=1.50%, sys=2.21%, ctx=3211, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,7561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job5: (groupid=0, jobs=1): err= 0: pid=1223236: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=459, BW=115MiB/s (121MB/s)(1169MiB/10163msec); 0 zone resets 00:26:21.139 slat (usec): min=23, max=49809, avg=1574.93, stdev=4071.80 00:26:21.139 clat (usec): min=1033, max=365735, avg=137205.76, stdev=62634.61 00:26:21.139 lat (usec): min=1518, max=365778, avg=138780.68, stdev=63392.31 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 46], 20.00th=[ 91], 00:26:21.139 | 30.00th=[ 103], 40.00th=[ 123], 50.00th=[ 140], 60.00th=[ 159], 00:26:21.139 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 215], 95.00th=[ 232], 00:26:21.139 | 99.00th=[ 264], 99.50th=[ 279], 99.90th=[ 355], 99.95th=[ 368], 00:26:21.139 | 99.99th=[ 368] 00:26:21.139 bw ( KiB/s): min=71680, max=185344, per=7.54%, avg=118041.60, stdev=34803.90, samples=20 00:26:21.139 iops : min= 280, max= 724, avg=461.10, stdev=135.95, samples=20 00:26:21.139 lat (msec) : 2=0.11%, 4=0.47%, 10=1.78%, 20=2.31%, 50=6.76% 00:26:21.139 lat (msec) : 100=14.23%, 250=72.04%, 500=2.31% 00:26:21.139 cpu : usr=1.10%, sys=1.61%, ctx=2449, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,4674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job6: (groupid=0, jobs=1): err= 0: pid=1223237: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=803, BW=201MiB/s (211MB/s)(2042MiB/10168msec); 0 zone resets 00:26:21.139 slat (usec): min=25, max=63859, avg=977.04, stdev=2420.13 00:26:21.139 clat (usec): min=1123, max=359098, avg=78627.11, stdev=44288.43 00:26:21.139 lat (usec): min=1170, max=359142, avg=79604.14, stdev=44812.31 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 44], 00:26:21.139 | 30.00th=[ 48], 40.00th=[ 69], 50.00th=[ 77], 60.00th=[ 80], 00:26:21.139 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 138], 95.00th=[ 157], 00:26:21.139 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 338], 99.95th=[ 347], 00:26:21.139 | 99.99th=[ 359] 00:26:21.139 bw ( KiB/s): min=109568, max=358912, per=13.25%, avg=207473.90, stdev=65627.67, samples=20 00:26:21.139 iops : min= 428, max= 1402, avg=810.40, stdev=256.41, samples=20 00:26:21.139 lat (msec) : 2=0.17%, 4=0.34%, 10=1.91%, 20=5.01%, 50=26.77% 00:26:21.139 lat (msec) : 100=37.81%, 250=27.52%, 500=0.47% 00:26:21.139 cpu : usr=1.79%, sys=2.23%, ctx=3777, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,8169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job7: (groupid=0, jobs=1): err= 0: pid=1223238: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=550, BW=138MiB/s (144MB/s)(1399MiB/10160msec); 0 zone resets 00:26:21.139 slat (usec): min=17, max=31389, avg=1648.01, stdev=3387.13 00:26:21.139 clat (msec): min=2, max=354, avg=114.48, stdev=49.57 00:26:21.139 lat (msec): min=2, max=354, avg=116.13, stdev=50.20 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 58], 20.00th=[ 74], 00:26:21.139 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 114], 00:26:21.139 | 70.00th=[ 134], 80.00th=[ 157], 90.00th=[ 180], 95.00th=[ 207], 00:26:21.139 | 99.00th=[ 241], 99.50th=[ 275], 99.90th=[ 342], 99.95th=[ 342], 00:26:21.139 | 99.99th=[ 355] 00:26:21.139 bw ( KiB/s): min=88064, max=285696, per=9.05%, avg=141656.95, stdev=52053.70, samples=20 00:26:21.139 iops : min= 344, max= 1116, avg=553.30, stdev=203.34, samples=20 00:26:21.139 lat (msec) : 4=0.13%, 10=0.50%, 20=1.16%, 50=6.07%, 100=28.50% 00:26:21.139 lat (msec) : 250=62.96%, 500=0.68% 00:26:21.139 cpu : usr=1.37%, sys=1.86%, ctx=1964, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,5597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job8: (groupid=0, jobs=1): err= 0: pid=1223239: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=419, BW=105MiB/s (110MB/s)(1061MiB/10126msec); 0 zone resets 00:26:21.139 slat (usec): min=24, max=31324, avg=1962.59, stdev=4407.14 00:26:21.139 clat (usec): min=1002, max=297255, avg=150712.56, stdev=59659.37 00:26:21.139 lat (usec): min=1060, max=308699, avg=152675.15, stdev=60669.98 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 6], 5.00th=[ 25], 10.00th=[ 54], 20.00th=[ 101], 00:26:21.139 | 30.00th=[ 133], 40.00th=[ 153], 50.00th=[ 167], 60.00th=[ 178], 00:26:21.139 | 70.00th=[ 188], 80.00th=[ 201], 90.00th=[ 218], 95.00th=[ 224], 00:26:21.139 | 99.00th=[ 234], 99.50th=[ 243], 99.90th=[ 288], 99.95th=[ 296], 00:26:21.139 | 99.99th=[ 296] 00:26:21.139 bw ( KiB/s): min=71680, max=160768, per=6.83%, avg=106994.10, stdev=27827.53, samples=20 00:26:21.139 iops : min= 280, max= 628, avg=417.90, stdev=108.65, samples=20 00:26:21.139 lat (msec) : 2=0.26%, 4=0.61%, 10=0.94%, 20=2.05%, 50=5.35% 00:26:21.139 lat (msec) : 100=10.82%, 250=79.50%, 500=0.47% 00:26:21.139 cpu : usr=1.03%, sys=1.31%, ctx=2026, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,4243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job9: (groupid=0, jobs=1): err= 0: pid=1223240: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=405, 
BW=101MiB/s (106MB/s)(1027MiB/10124msec); 0 zone resets 00:26:21.139 slat (usec): min=22, max=35004, avg=2147.36, stdev=4551.87 00:26:21.139 clat (msec): min=4, max=236, avg=155.55, stdev=51.75 00:26:21.139 lat (msec): min=6, max=236, avg=157.70, stdev=52.60 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 23], 5.00th=[ 63], 10.00th=[ 92], 20.00th=[ 109], 00:26:21.139 | 30.00th=[ 124], 40.00th=[ 140], 50.00th=[ 161], 60.00th=[ 180], 00:26:21.139 | 70.00th=[ 194], 80.00th=[ 205], 90.00th=[ 220], 95.00th=[ 226], 00:26:21.139 | 99.00th=[ 232], 99.50th=[ 234], 99.90th=[ 236], 99.95th=[ 236], 00:26:21.139 | 99.99th=[ 236] 00:26:21.139 bw ( KiB/s): min=71680, max=153600, per=6.61%, avg=103513.95, stdev=27892.87, samples=20 00:26:21.139 iops : min= 280, max= 600, avg=404.35, stdev=108.96, samples=20 00:26:21.139 lat (msec) : 10=0.07%, 20=0.75%, 50=2.92%, 100=8.33%, 250=87.92% 00:26:21.139 cpu : usr=1.28%, sys=1.17%, ctx=1666, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,4107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 job10: (groupid=0, jobs=1): err= 0: pid=1223241: Mon Jul 15 12:16:10 2024 00:26:21.139 write: IOPS=637, BW=159MiB/s (167MB/s)(1612MiB/10106msec); 0 zone resets 00:26:21.139 slat (usec): min=21, max=49220, avg=1384.93, stdev=3200.94 00:26:21.139 clat (usec): min=860, max=247300, avg=98913.98, stdev=55649.03 00:26:21.139 lat (usec): min=902, max=247353, avg=100298.92, stdev=56406.73 00:26:21.139 clat percentiles (msec): 00:26:21.139 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 39], 20.00th=[ 43], 00:26:21.139 | 30.00th=[ 73], 40.00th=[ 79], 50.00th=[ 93], 60.00th=[ 105], 00:26:21.139 | 70.00th=[ 109], 80.00th=[ 134], 90.00th=[ 197], 95.00th=[ 222], 00:26:21.139 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 247], 99.95th=[ 247], 00:26:21.139 | 99.99th=[ 247] 00:26:21.139 bw ( KiB/s): min=69632, max=398336, per=10.43%, avg=163391.10, stdev=89330.63, samples=20 00:26:21.139 iops : min= 272, max= 1556, avg=638.20, stdev=348.96, samples=20 00:26:21.139 lat (usec) : 1000=0.02% 00:26:21.139 lat (msec) : 2=0.02%, 4=0.11%, 10=0.76%, 20=2.26%, 50=18.99% 00:26:21.139 lat (msec) : 100=33.46%, 250=44.38% 00:26:21.139 cpu : usr=1.55%, sys=1.73%, ctx=2320, majf=0, minf=1 00:26:21.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:21.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.139 issued rwts: total=0,6446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.139 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.139 00:26:21.139 Run status group 0 (all jobs): 00:26:21.139 WRITE: bw=1529MiB/s (1604MB/s), 93.1MiB/s-201MiB/s (97.6MB/s-211MB/s), io=15.2GiB (16.3GB), run=10067-10168msec 00:26:21.139 00:26:21.139 Disk stats (read/write): 00:26:21.139 nvme0n1: ios=49/12662, merge=0/0, ticks=33/1218618, in_queue=1218651, util=97.30% 00:26:21.139 nvme10n1: ios=49/11842, merge=0/0, ticks=52/1215183, in_queue=1215235, util=97.59% 00:26:21.139 nvme1n1: ios=49/7395, merge=0/0, ticks=40/1207680, in_queue=1207720, util=97.75% 00:26:21.139 nvme2n1: ios=51/10044, merge=0/0, ticks=585/1207843, in_queue=1208428, 
util=100.00% 00:26:21.139 nvme3n1: ios=45/14871, merge=0/0, ticks=1158/1206796, in_queue=1207954, util=100.00% 00:26:21.139 nvme4n1: ios=47/9174, merge=0/0, ticks=1930/1207452, in_queue=1209382, util=100.00% 00:26:21.139 nvme5n1: ios=45/16172, merge=0/0, ticks=943/1211830, in_queue=1212773, util=100.00% 00:26:21.139 nvme6n1: ios=0/11031, merge=0/0, ticks=0/1206698, in_queue=1206698, util=98.39% 00:26:21.139 nvme7n1: ios=0/8229, merge=0/0, ticks=0/1210084, in_queue=1210084, util=98.76% 00:26:21.139 nvme8n1: ios=0/8011, merge=0/0, ticks=0/1212893, in_queue=1212893, util=98.91% 00:26:21.139 nvme9n1: ios=0/12689, merge=0/0, ticks=0/1212189, in_queue=1212189, util=99.05% 00:26:21.139 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:21.139 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:21.139 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:21.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:21.140 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:21.140 12:16:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.140 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:21.399 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.399 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:21.657 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:21.657 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:21.657 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:21.657 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:21.657 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.658 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:21.916 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:21.916 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.175 12:16:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:22.175 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.175 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:22.434 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # local i=0 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.434 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:22.693 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:22.693 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:22.693 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- 
# grep -q -w SPDK9 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:22.952 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.952 12:16:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:23.211 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # 
set +x 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.211 rmmod nvme_tcp 00:26:23.211 rmmod nvme_fabrics 00:26:23.211 rmmod nvme_keyring 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1214979 ']' 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1214979 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1214979 ']' 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1214979 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1214979 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1214979' 00:26:23.211 killing process with pid 1214979 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1214979 00:26:23.211 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1214979 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.778 12:16:13 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.684 12:16:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.684 00:26:25.684 real 1m10.903s 00:26:25.684 user 4m12.325s 00:26:25.684 sys 0m23.719s 00:26:25.684 12:16:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:25.684 12:16:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.684 ************************************ 00:26:25.684 END TEST nvmf_multiconnection 00:26:25.684 ************************************ 00:26:25.943 12:16:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:25.943 12:16:15 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:25.943 12:16:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:25.943 12:16:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.943 12:16:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.943 ************************************ 00:26:25.943 START TEST nvmf_initiator_timeout 00:26:25.943 ************************************ 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:25.943 * Looking for test storage... 00:26:25.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.943 12:16:15 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.943 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.944 12:16:15 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.944 12:16:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local 
-ga mlx 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:32.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:32.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:32.572 Found net devices under 0000:86:00.0: cvl_0_0 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:32.572 Found net devices under 0000:86:00.1: cvl_0_1 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:32.572 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:32.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:26:32.573 00:26:32.573 --- 10.0.0.2 ping statistics --- 00:26:32.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.573 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:26:32.573 00:26:32.573 --- 10.0.0.1 ping statistics --- 00:26:32.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.573 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1228465 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1228465 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1228465 ']' 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:32.573 12:16:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 [2024-07-15 12:16:21.665460] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
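The namespace setup in the trace above is easier to follow pulled out of the xtrace. A minimal sketch of what nvmftestinit does on this host, reconstructed from the commands the log records (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones reported above; this is an illustration, not the verbatim nvmf/common.sh code):

    # one E810 port becomes the target and is isolated in its own network namespace,
    # the other port stays in the root namespace as the initiator side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator

Both pings above complete with 0% packet loss, so the target application is started inside the namespace next.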
00:26:32.573 [2024-07-15 12:16:21.665505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.573 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.573 [2024-07-15 12:16:21.739778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.573 [2024-07-15 12:16:21.782859] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.573 [2024-07-15 12:16:21.782897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.573 [2024-07-15 12:16:21.782905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.573 [2024-07-15 12:16:21.782911] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.573 [2024-07-15 12:16:21.782916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.573 [2024-07-15 12:16:21.782978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.573 [2024-07-15 12:16:21.783000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.573 [2024-07-15 12:16:21.783093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.573 [2024-07-15 12:16:21.783094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 Malloc0 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 Delay0 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.573 12:16:22 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 [2024-07-15 12:16:22.551478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.573 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.832 [2024-07-15 12:16:22.576364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.832 12:16:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:33.766 12:16:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:33.766 12:16:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:33.766 12:16:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.766 12:16:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:33.766 12:16:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:36.299 12:16:25 
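Condensed from the rpc_cmd calls in the trace above (rpc_cmd is the test-harness wrapper around SPDK's scripts/rpc.py; the direct equivalents below reuse the arguments the log records and are an illustrative sketch rather than the exact harness code), the target is provisioned and the kernel initiator attached roughly like this:

    # start the target inside the namespace (binary path as recorded in the log)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # back a 64 MB malloc bdev with a delay bdev (30 us artificial latency on each path),
    # then export it over NVMe/TCP on the target address
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # root namespace: attach the kernel NVMe/TCP host and wait for the namespace to appear
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # waitforserial loop in the harness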
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1229158 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:36.299 12:16:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:36.299 [global] 00:26:36.299 thread=1 00:26:36.299 invalidate=1 00:26:36.299 rw=write 00:26:36.299 time_based=1 00:26:36.299 runtime=60 00:26:36.299 ioengine=libaio 00:26:36.299 direct=1 00:26:36.299 bs=4096 00:26:36.299 iodepth=1 00:26:36.299 norandommap=0 00:26:36.299 numjobs=1 00:26:36.299 00:26:36.299 verify_dump=1 00:26:36.299 verify_backlog=512 00:26:36.299 verify_state_save=0 00:26:36.299 do_verify=1 00:26:36.299 verify=crc32c-intel 00:26:36.299 [job0] 00:26:36.299 filename=/dev/nvme0n1 00:26:36.299 Could not set queue depth (nvme0n1) 00:26:36.299 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:36.299 fio-3.35 00:26:36.299 Starting 1 thread 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.832 true 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.832 true 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.832 true 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.832 true 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.832 12:16:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 true 00:26:42.167 12:16:31 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 true 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 true 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:42.167 true 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:42.167 12:16:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1229158 00:27:38.392 00:27:38.392 job0: (groupid=0, jobs=1): err= 0: pid=1229350: Mon Jul 15 12:17:26 2024 00:27:38.392 read: IOPS=91, BW=367KiB/s (376kB/s)(21.5MiB/60027msec) 00:27:38.392 slat (nsec): min=6764, max=59191, avg=8772.84, stdev=4060.87 00:27:38.392 clat (usec): min=236, max=41978, avg=3108.24, stdev=10356.43 00:27:38.392 lat (usec): min=243, max=42006, avg=3117.01, stdev=10360.26 00:27:38.392 clat percentiles (usec): 00:27:38.392 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 265], 00:27:38.392 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 277], 00:27:38.392 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[41157], 00:27:38.392 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:27:38.392 | 99.99th=[42206] 00:27:38.392 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60027msec); 0 zone resets 00:27:38.392 slat (nsec): min=9956, max=42575, avg=11225.93, stdev=1561.57 00:27:38.392 clat (usec): min=170, max=41621k, avg=7593.71, stdev=554593.18 00:27:38.392 lat (usec): min=182, max=41621k, avg=7604.94, stdev=554593.17 00:27:38.392 clat percentiles (usec): 00:27:38.392 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 00:27:38.392 | 20.00th=[ 194], 30.00th=[ 198], 40.00th=[ 200], 00:27:38.392 | 50.00th=[ 202], 60.00th=[ 206], 70.00th=[ 208], 00:27:38.392 | 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 227], 00:27:38.392 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 285], 00:27:38.392 | 99.95th=[ 330], 99.99th=[17112761] 00:27:38.392 bw ( KiB/s): min= 1640, max= 8192, per=100.00%, avg=6436.57, stdev=3007.21, samples=7 00:27:38.392 iops : min= 410, max= 2048, avg=1609.14, stdev=751.80, samples=7 00:27:38.392 lat (usec) : 250=51.31%, 500=45.19%, 750=0.05% 00:27:38.392 lat (msec) : 50=3.44%, >=2000=0.01% 00:27:38.392 cpu : usr=0.17%, sys=0.29%, ctx=11138, majf=0, 
minf=2 00:27:38.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.392 issued rwts: total=5505,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:38.392 00:27:38.392 Run status group 0 (all jobs): 00:27:38.392 READ: bw=367KiB/s (376kB/s), 367KiB/s-367KiB/s (376kB/s-376kB/s), io=21.5MiB (22.5MB), run=60027-60027msec 00:27:38.392 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.1MB), run=60027-60027msec 00:27:38.392 00:27:38.392 Disk stats (read/write): 00:27:38.392 nvme0n1: ios=5600/5632, merge=0/0, ticks=16943/1085, in_queue=18028, util=99.78% 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:38.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:38.392 nvmf hotplug test: fio successful as expected 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- 
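For context, the run above follows the pattern visible in the trace: fio starts a 60-second 4 KiB write job against /dev/nvme0n1, the Delay0 latencies are raised from 30 us to roughly 31 s (presumably so outstanding I/O outlives the host's default 30 s I/O timeout, which is what initiator_timeout.sh exercises), and after a short sleep they are dropped back to 30 us so the job can finish and report "fio successful as expected". Condensed from the rpc_cmd calls earlier in the trace (values copied verbatim, units are microseconds; sketch only):

    # stall the delay bdev while fio is running, wait, then restore it
    rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
    rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    for metric in avg_read avg_write p99_read p99_write; do
        rpc.py bdev_delay_update_latency Delay0 "$metric" 30
    done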
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:38.392 rmmod nvme_tcp 00:27:38.392 rmmod nvme_fabrics 00:27:38.392 rmmod nvme_keyring 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:27:38.392 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1228465 ']' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1228465 ']' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228465' 00:27:38.393 killing process with pid 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1228465 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.393 12:17:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.963 12:17:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.963 00:27:38.963 real 1m13.074s 00:27:38.963 user 4m25.005s 00:27:38.963 sys 0m6.391s 00:27:38.963 12:17:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.963 12:17:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.963 ************************************ 00:27:38.963 END TEST nvmf_initiator_timeout 00:27:38.963 ************************************ 00:27:38.963 12:17:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:38.963 12:17:28 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:27:38.963 12:17:28 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:27:38.963 12:17:28 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:27:38.963 
12:17:28 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.963 12:17:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:44.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:44.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:44.259 Found net devices under 0000:86:00.0: cvl_0_0 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:44.259 Found net devices under 0000:86:00.1: cvl_0_1 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:44.259 12:17:34 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:44.259 12:17:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.259 12:17:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.259 12:17:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.519 ************************************ 00:27:44.519 START TEST nvmf_perf_adq 00:27:44.519 ************************************ 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:44.519 * Looking for test storage... 
00:27:44.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.519 12:17:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:49.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:49.793 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:49.793 Found net devices under 0000:86:00.0: cvl_0_0 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:49.793 Found net devices under 0000:86:00.1: cvl_0_1 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:49.793 12:17:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:51.168 12:17:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:53.071 12:17:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:58.344 12:17:47 
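Before the perf_adq test initializes networking, the trace above runs adq_reload_driver, presumably so the E810 ports come back in a clean state before any ADQ (Application Device Queues) configuration is applied. In essence:

    rmmod ice
    modprobe ice
    sleep 5    # give the cvl_* interfaces time to reappear before nvmftestinit runs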
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.344 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.344 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.344 12:17:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.344 12:17:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:27:58.344 00:27:58.344 --- 10.0.0.2 ping statistics --- 00:27:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.344 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:27:58.344 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:27:58.344 00:27:58.344 --- 10.0.0.1 ping statistics --- 00:27:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.345 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1246832 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1246832 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1246832 ']' 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.345 12:17:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.345 [2024-07-15 12:17:48.323645] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
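This is the whole of nvmftestinit for a phy TCP run: the target port (cvl_0_0, 10.0.0.2) is isolated in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is started inside the namespace. This only works because the two E810 ports are physically looped back on this rig. Collected in one place, with the interface names and addresses exactly as they appear in this run (they will differ on other inventory), the setup is:

    # Target port into its own namespace; initiator port stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the point-to-point 10.0.0.0/24 link and bring them up.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port on the initiator side and sanity-check connectivity.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmfappstart then prefixes every target command with ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt invocation above carries that wrapper; the perf initiator later runs unwrapped in the root namespace and reaches the target over 10.0.0.2:4420.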
00:27:58.345 [2024-07-15 12:17:48.323691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.603 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.603 [2024-07-15 12:17:48.396946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.603 [2024-07-15 12:17:48.438631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.603 [2024-07-15 12:17:48.438669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.603 [2024-07-15 12:17:48.438676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.603 [2024-07-15 12:17:48.438683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.603 [2024-07-15 12:17:48.438688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.603 [2024-07-15 12:17:48.438730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.603 [2024-07-15 12:17:48.438837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.603 [2024-07-15 12:17:48.438945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.603 [2024-07-15 12:17:48.438946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.171 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:59.171 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:59.172 12:17:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.172 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:59.172 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 [2024-07-15 12:17:49.320087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 Malloc1 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.430 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.431 [2024-07-15 12:17:49.363810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1247084 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:59.431 12:17:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:59.431 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.965 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:01.965 12:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:01.966 
"tick_rate": 2300000000, 00:28:01.966 "poll_groups": [ 00:28:01.966 { 00:28:01.966 "name": "nvmf_tgt_poll_group_000", 00:28:01.966 "admin_qpairs": 1, 00:28:01.966 "io_qpairs": 1, 00:28:01.966 "current_admin_qpairs": 1, 00:28:01.966 "current_io_qpairs": 1, 00:28:01.966 "pending_bdev_io": 0, 00:28:01.966 "completed_nvme_io": 20590, 00:28:01.966 "transports": [ 00:28:01.966 { 00:28:01.966 "trtype": "TCP" 00:28:01.966 } 00:28:01.966 ] 00:28:01.966 }, 00:28:01.966 { 00:28:01.966 "name": "nvmf_tgt_poll_group_001", 00:28:01.966 "admin_qpairs": 0, 00:28:01.966 "io_qpairs": 1, 00:28:01.966 "current_admin_qpairs": 0, 00:28:01.966 "current_io_qpairs": 1, 00:28:01.966 "pending_bdev_io": 0, 00:28:01.966 "completed_nvme_io": 20882, 00:28:01.966 "transports": [ 00:28:01.966 { 00:28:01.966 "trtype": "TCP" 00:28:01.966 } 00:28:01.966 ] 00:28:01.966 }, 00:28:01.966 { 00:28:01.966 "name": "nvmf_tgt_poll_group_002", 00:28:01.966 "admin_qpairs": 0, 00:28:01.966 "io_qpairs": 1, 00:28:01.966 "current_admin_qpairs": 0, 00:28:01.966 "current_io_qpairs": 1, 00:28:01.966 "pending_bdev_io": 0, 00:28:01.966 "completed_nvme_io": 20737, 00:28:01.966 "transports": [ 00:28:01.966 { 00:28:01.966 "trtype": "TCP" 00:28:01.966 } 00:28:01.966 ] 00:28:01.966 }, 00:28:01.966 { 00:28:01.966 "name": "nvmf_tgt_poll_group_003", 00:28:01.966 "admin_qpairs": 0, 00:28:01.966 "io_qpairs": 1, 00:28:01.966 "current_admin_qpairs": 0, 00:28:01.966 "current_io_qpairs": 1, 00:28:01.966 "pending_bdev_io": 0, 00:28:01.966 "completed_nvme_io": 20737, 00:28:01.966 "transports": [ 00:28:01.966 { 00:28:01.966 "trtype": "TCP" 00:28:01.966 } 00:28:01.966 ] 00:28:01.966 } 00:28:01.966 ] 00:28:01.966 }' 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:01.966 12:17:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1247084 00:28:10.081 Initializing NVMe Controllers 00:28:10.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:10.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:10.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:10.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:10.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:10.082 Initialization complete. Launching workers. 
00:28:10.082 ======================================================== 00:28:10.082 Latency(us) 00:28:10.082 Device Information : IOPS MiB/s Average min max 00:28:10.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10936.54 42.72 5854.12 3008.19 7445.06 00:28:10.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11096.14 43.34 5769.26 2823.38 9275.37 00:28:10.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11027.64 43.08 5804.04 3138.94 9385.56 00:28:10.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10940.14 42.73 5852.29 1675.75 9662.90 00:28:10.082 ======================================================== 00:28:10.082 Total : 44000.45 171.88 5819.72 1675.75 9662.90 00:28:10.082 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.082 rmmod nvme_tcp 00:28:10.082 rmmod nvme_fabrics 00:28:10.082 rmmod nvme_keyring 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1246832 ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1246832 ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1246832' 00:28:10.082 killing process with pid 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1246832 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.082 12:17:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.983 12:18:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.983 12:18:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:11.983 12:18:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:13.367 12:18:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:15.303 12:18:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.572 12:18:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.572 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.572 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.572 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.573 12:18:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.573 
12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:28:20.573 00:28:20.573 --- 10.0.0.2 ping statistics --- 00:28:20.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.573 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:20.573 00:28:20.573 --- 10.0.0.1 ping statistics --- 00:28:20.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.573 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:20.573 net.core.busy_poll = 1 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:20.573 net.core.busy_read = 1 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1251256 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1251256 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1251256 ']' 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.573 12:18:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:20.573 [2024-07-15 12:18:10.527828] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:28:20.573 [2024-07-15 12:18:10.527877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.831 [2024-07-15 12:18:10.600710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.831 [2024-07-15 12:18:10.643050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.831 [2024-07-15 12:18:10.643089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.831 [2024-07-15 12:18:10.643097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.831 [2024-07-15 12:18:10.643103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.831 [2024-07-15 12:18:10.643108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
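adq_configure_driver is the piece that actually enables ADQ for this second measurement: hardware TC offload on the target port, busy polling, an mqprio root qdisc that splits the port's queues into two traffic classes, and a hardware-only flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1. Restated without the ip netns exec wrapper, and with the interface, address, and 2@0 2@2 queue layout taken from this trace (they are host-specific), it amounts to:

    IFACE=cvl_0_0            # target-side E810 port in this run

    # Hardware traffic-class offload; disable the channel packet-inspect optimization.
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

    # Busy polling so application threads keep polling their own queues.
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1

    # Two traffic classes: TC0 = 2 queues starting at 0, TC1 = 2 queues starting at 2,
    # offloaded to the NIC in channel mode.
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress

    # Steer NVMe/TCP connections to 10.0.0.2:4420 into TC1, hardware only (skip_sw).
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The harness finishes with scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to line XPS up with the same queue groups, and the ADQ-aware target is then configured with --enable-placement-id 1 and a transport --sock-priority of 1, so that SPDK can group incoming connections by the hardware queue they arrive on.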
00:28:20.831 [2024-07-15 12:18:10.646249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.831 [2024-07-15 12:18:10.646291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.831 [2024-07-15 12:18:10.646399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.831 [2024-07-15 12:18:10.646401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.397 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 [2024-07-15 12:18:11.520023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 Malloc1 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.655 [2024-07-15 12:18:11.571497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1251418 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:21.655 12:18:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:21.655 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:24.188 "tick_rate": 2300000000, 00:28:24.188 "poll_groups": [ 00:28:24.188 { 00:28:24.188 "name": "nvmf_tgt_poll_group_000", 00:28:24.188 "admin_qpairs": 1, 00:28:24.188 "io_qpairs": 2, 00:28:24.188 "current_admin_qpairs": 1, 00:28:24.188 "current_io_qpairs": 2, 00:28:24.188 "pending_bdev_io": 0, 00:28:24.188 "completed_nvme_io": 28732, 00:28:24.188 "transports": [ 00:28:24.188 { 00:28:24.188 "trtype": "TCP" 00:28:24.188 } 00:28:24.188 ] 00:28:24.188 }, 00:28:24.188 { 00:28:24.188 "name": "nvmf_tgt_poll_group_001", 00:28:24.188 "admin_qpairs": 0, 00:28:24.188 "io_qpairs": 2, 00:28:24.188 "current_admin_qpairs": 0, 00:28:24.188 "current_io_qpairs": 2, 00:28:24.188 "pending_bdev_io": 0, 00:28:24.188 "completed_nvme_io": 30005, 00:28:24.188 "transports": [ 00:28:24.188 { 00:28:24.188 "trtype": "TCP" 00:28:24.188 } 00:28:24.188 ] 00:28:24.188 }, 00:28:24.188 { 00:28:24.188 "name": "nvmf_tgt_poll_group_002", 00:28:24.188 "admin_qpairs": 0, 00:28:24.188 "io_qpairs": 0, 00:28:24.188 "current_admin_qpairs": 0, 00:28:24.188 "current_io_qpairs": 0, 00:28:24.188 "pending_bdev_io": 0, 00:28:24.188 "completed_nvme_io": 0, 
00:28:24.188 "transports": [ 00:28:24.188 { 00:28:24.188 "trtype": "TCP" 00:28:24.188 } 00:28:24.188 ] 00:28:24.188 }, 00:28:24.188 { 00:28:24.188 "name": "nvmf_tgt_poll_group_003", 00:28:24.188 "admin_qpairs": 0, 00:28:24.188 "io_qpairs": 0, 00:28:24.188 "current_admin_qpairs": 0, 00:28:24.188 "current_io_qpairs": 0, 00:28:24.188 "pending_bdev_io": 0, 00:28:24.188 "completed_nvme_io": 0, 00:28:24.188 "transports": [ 00:28:24.188 { 00:28:24.188 "trtype": "TCP" 00:28:24.188 } 00:28:24.188 ] 00:28:24.188 } 00:28:24.188 ] 00:28:24.188 }' 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:24.188 12:18:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1251418 00:28:32.305 Initializing NVMe Controllers 00:28:32.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:32.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:32.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:32.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:32.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:32.305 Initialization complete. Launching workers. 00:28:32.305 ======================================================== 00:28:32.305 Latency(us) 00:28:32.305 Device Information : IOPS MiB/s Average min max 00:28:32.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8211.75 32.08 7820.55 1383.00 53342.68 00:28:32.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8019.46 31.33 7981.96 1426.92 53202.77 00:28:32.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7550.16 29.49 8478.02 1490.41 53136.54 00:28:32.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6845.96 26.74 9351.35 1422.33 55142.55 00:28:32.305 ======================================================== 00:28:32.305 Total : 30627.33 119.64 8367.06 1383.00 55142.55 00:28:32.305 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.305 rmmod nvme_tcp 00:28:32.305 rmmod nvme_fabrics 00:28:32.305 rmmod nvme_keyring 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1251256 ']' 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1251256 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1251256 ']' 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1251256 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1251256 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1251256' 00:28:32.305 killing process with pid 1251256 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1251256 00:28:32.305 12:18:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1251256 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:32.305 12:18:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.594 12:18:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.594 12:18:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:35.594 00:28:35.594 real 0m50.865s 00:28:35.594 user 2m49.553s 00:28:35.594 sys 0m9.599s 00:28:35.594 12:18:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.594 12:18:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.594 ************************************ 00:28:35.594 END TEST nvmf_perf_adq 00:28:35.594 ************************************ 00:28:35.594 12:18:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:35.594 12:18:25 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:35.594 12:18:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:35.594 12:18:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.594 12:18:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.594 ************************************ 00:28:35.594 START TEST nvmf_shutdown 00:28:35.594 ************************************ 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:35.594 * Looking for test storage... 
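Both measurement loops use the target's nvmf_get_stats RPC to verify where the I/O queue pairs ended up: the baseline run requires all four poll groups to carry exactly one qpair, while the ADQ run (placement-id 1, sock priority 1) requires the connections to collapse onto two poll groups and leave the other two idle, as the second stats dump above shows. Outside the harness the same check could be done directly with rpc.py and jq, assuming the default /var/tmp/spdk.sock RPC socket:

    # Count poll groups that currently carry no I/O qpairs; with ADQ steering two of
    # the four groups should be idle (the test compares this count against 2).
    idle=$(./scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
            | wc -l)
    echo "idle poll groups: $idle"

Note that the perf initiator runs on a disjoint core mask (-c 0xF0 against the target's -m 0xF), so target and initiator never share cores during the 10-second randread run.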
00:28:35.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.594 ************************************ 00:28:35.594 START TEST nvmf_shutdown_tc1 00:28:35.594 ************************************ 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:28:35.594 12:18:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:35.594 12:18:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:40.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:40.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:40.866 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:41.124 12:18:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.124 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:41.125 Found net devices under 0000:86:00.0: cvl_0_0 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:41.125 Found net devices under 0000:86:00.1: cvl_0_1 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.125 12:18:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:41.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:28:41.125 00:28:41.125 --- 10.0.0.2 ping statistics --- 00:28:41.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.125 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:41.125 00:28:41.125 --- 10.0.0.1 ping statistics --- 00:28:41.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.125 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:41.125 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1256854 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1256854 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1256854 ']' 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:41.384 12:18:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:41.384 [2024-07-15 12:18:31.214198] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
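The nvmftestinit/nvmfappstart sequence traced above builds the two-port TCP topology: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, reachability is confirmed with ping in both directions, and nvmf_tgt is started inside the namespace. A condensed sketch of the same steps, assuming the interface and namespace names from the trace and rpc.py for the readiness poll; it is not the helper functions verbatim:

    # Target-side port lives in its own namespace; initiator-side port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target reachability check

    # Start the target inside the namespace and wait for its RPC socket to answer.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done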
00:28:41.384 [2024-07-15 12:18:31.214247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.384 [2024-07-15 12:18:31.282946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:41.384 [2024-07-15 12:18:31.324222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.384 [2024-07-15 12:18:31.324264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.384 [2024-07-15 12:18:31.324271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.384 [2024-07-15 12:18:31.324277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.384 [2024-07-15 12:18:31.324282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.384 [2024-07-15 12:18:31.324394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.384 [2024-07-15 12:18:31.324518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.384 [2024-07-15 12:18:31.324624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.384 [2024-07-15 12:18:31.324626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.327 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.328 [2024-07-15 12:18:32.065607] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:42.328 12:18:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.328 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.328 Malloc1 00:28:42.328 [2024-07-15 12:18:32.161622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.328 Malloc2 00:28:42.328 Malloc3 00:28:42.328 Malloc4 00:28:42.328 Malloc5 00:28:42.586 Malloc6 00:28:42.586 Malloc7 00:28:42.586 Malloc8 00:28:42.586 Malloc9 00:28:42.586 Malloc10 00:28:42.586 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.586 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:42.586 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:42.586 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.845 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1257135 00:28:42.845 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1257135 
/var/tmp/bdevperf.sock 00:28:42.845 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1257135 ']' 00:28:42.845 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 
"name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 
00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 [2024-07-15 12:18:32.636547] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:28:42.846 [2024-07-15 12:18:32.636595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.846 { 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme$subsystem", 00:28:42.846 "trtype": "$TEST_TRANSPORT", 00:28:42.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "$NVMF_PORT", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.846 "hdgst": ${hdgst:-false}, 00:28:42.846 "ddgst": ${ddgst:-false} 00:28:42.846 }, 00:28:42.846 "method": "bdev_nvme_attach_controller" 00:28:42.846 } 00:28:42.846 EOF 00:28:42.846 )") 00:28:42.846 12:18:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:42.846 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:42.846 12:18:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:42.846 "params": { 00:28:42.846 "name": "Nvme1", 00:28:42.846 "trtype": "tcp", 00:28:42.846 "traddr": "10.0.0.2", 00:28:42.846 "adrfam": "ipv4", 00:28:42.846 "trsvcid": "4420", 00:28:42.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.846 "hdgst": false, 00:28:42.846 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme2", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme3", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme4", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme5", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme6", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme7", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme8", 00:28:42.847 "trtype": "tcp", 00:28:42.847 
"traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme9", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 },{ 00:28:42.847 "params": { 00:28:42.847 "name": "Nvme10", 00:28:42.847 "trtype": "tcp", 00:28:42.847 "traddr": "10.0.0.2", 00:28:42.847 "adrfam": "ipv4", 00:28:42.847 "trsvcid": "4420", 00:28:42.847 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:42.847 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:42.847 "hdgst": false, 00:28:42.847 "ddgst": false 00:28:42.847 }, 00:28:42.847 "method": "bdev_nvme_attach_controller" 00:28:42.847 }' 00:28:42.847 [2024-07-15 12:18:32.705455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.847 [2024-07-15 12:18:32.744761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1257135 00:28:44.223 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:44.224 12:18:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:45.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1257135 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1256854 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 [2024-07-15 12:18:35.062878] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:28:45.160 [2024-07-15 12:18:35.062924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257486 ] 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:45.160 { 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme$subsystem", 00:28:45.160 "trtype": "$TEST_TRANSPORT", 00:28:45.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "$NVMF_PORT", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.160 "hdgst": ${hdgst:-false}, 00:28:45.160 "ddgst": ${ddgst:-false} 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 } 00:28:45.160 EOF 00:28:45.160 )") 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
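The repeated config+=() here-documents above come from the gen_nvmf_target_json helper: it expands one bdev_nvme_attach_controller entry per subsystem id, joins the entries with commas, and pretty-prints the result with jq before handing it to bdev_svc/bdevperf over a /dev/fd descriptor. A minimal sketch of the same idea; the function name and the top-level "subsystems"/"bdev" wrapper are illustrative assumptions, while the per-entry fields are copied from the trace:

    # Build a bdevperf --json config with one NVMe-oF TCP attach entry per subsystem id.
    gen_target_json_sketch() {
        local entries=() i
        for i in "$@"; do
            entries+=("{\"params\":{\"name\":\"Nvme$i\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$i\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$i\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}")
        done
        local IFS=,
        # Join the entries and wrap them in a bdev-subsystem config, then pretty-print.
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}" | jq .
    }

    # Usage mirroring the traced invocation (ten controllers, config passed via process substitution):
    # ./build/examples/bdevperf --json <(gen_target_json_sketch {1..10}) -q 64 -o 65536 -w verify -t 1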
00:28:45.160 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:45.160 12:18:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme1", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme2", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme3", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme4", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme5", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme6", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme7", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme8", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:45.160 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme9", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 },{ 00:28:45.160 "params": { 00:28:45.160 "name": "Nvme10", 00:28:45.160 "trtype": "tcp", 00:28:45.160 "traddr": "10.0.0.2", 00:28:45.160 "adrfam": "ipv4", 00:28:45.160 "trsvcid": "4420", 00:28:45.160 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:45.160 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:45.160 "hdgst": false, 00:28:45.160 "ddgst": false 00:28:45.160 }, 00:28:45.160 "method": "bdev_nvme_attach_controller" 00:28:45.160 }' 00:28:45.160 [2024-07-15 12:18:35.130161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.418 [2024-07-15 12:18:35.169805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.793 Running I/O for 1 seconds... 00:28:47.727 00:28:47.727 Latency(us) 00:28:47.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme1n1 : 1.13 283.57 17.72 0.00 0.00 220208.80 15614.66 213362.42 00:28:47.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme2n1 : 1.14 281.65 17.60 0.00 0.00 222085.83 24048.86 216097.84 00:28:47.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme3n1 : 1.12 286.99 17.94 0.00 0.00 214726.57 15386.71 211538.81 00:28:47.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme4n1 : 1.12 285.20 17.82 0.00 0.00 212982.56 14303.94 218833.25 00:28:47.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme5n1 : 1.07 240.23 15.01 0.00 0.00 248181.76 18578.03 215186.03 00:28:47.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme6n1 : 1.14 279.97 17.50 0.00 0.00 210800.11 16982.37 218833.25 00:28:47.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme7n1 : 1.13 282.62 17.66 0.00 0.00 205518.14 14303.94 221568.67 00:28:47.727 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme8n1 : 1.14 280.89 17.56 0.00 0.00 203755.79 14474.91 216097.84 00:28:47.727 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme9n1 : 1.15 278.91 17.43 0.00 0.00 202118.90 14930.81 227039.50 00:28:47.727 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:47.727 Verification LBA range: start 0x0 length 0x400 00:28:47.727 Nvme10n1 : 1.19 269.61 16.85 0.00 0.00 199220.71 18236.10 244363.80 00:28:47.727 =================================================================================================================== 00:28:47.727 Total : 2769.63 173.10 0.00 0.00 213261.51 14303.94 244363.80 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:47.985 rmmod nvme_tcp 00:28:47.985 rmmod nvme_fabrics 00:28:47.985 rmmod nvme_keyring 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1256854 ']' 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1256854 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1256854 ']' 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1256854 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1256854 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1256854' 00:28:47.985 killing process with pid 1256854 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1256854 00:28:47.985 12:18:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1256854 
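The killprocess sequence traced above (the empty-pid guard, kill -0 liveness check, ps comm lookup, then kill and wait) is the usual autotest teardown of the target process. A rough, hedged equivalent of that pattern follows; the sudo/reactor-name special-casing done by the real autotest_common.sh helper is omitted here.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0         # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")        # identify what is being killed
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                # reap it if it is our child
}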
00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.553 12:18:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.456 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:50.456 00:28:50.456 real 0m15.056s 00:28:50.456 user 0m33.354s 00:28:50.456 sys 0m5.654s 00:28:50.456 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.456 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.456 ************************************ 00:28:50.456 END TEST nvmf_shutdown_tc1 00:28:50.456 ************************************ 00:28:50.456 12:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.716 ************************************ 00:28:50.716 START TEST nvmf_shutdown_tc2 00:28:50.716 ************************************ 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:50.716 
12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.716 12:18:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:50.716 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:50.717 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:50.717 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:28:50.717 Found net devices under 0000:86:00.0: cvl_0_0 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:50.717 Found net devices under 0000:86:00.1: cvl_0_1 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:50.717 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:50.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:28:50.976 00:28:50.976 --- 10.0.0.2 ping statistics --- 00:28:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.976 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:28:50.976 00:28:50.976 --- 10.0.0.1 ping statistics --- 00:28:50.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.976 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1258518 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1258518 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1258518 ']' 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.976 12:18:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.976 [2024-07-15 12:18:40.860390] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:28:50.976 [2024-07-15 12:18:40.860441] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.976 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.976 [2024-07-15 12:18:40.936409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:51.235 [2024-07-15 12:18:40.979647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.235 [2024-07-15 12:18:40.979684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.235 [2024-07-15 12:18:40.979692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.235 [2024-07-15 12:18:40.979698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.235 [2024-07-15 12:18:40.979704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
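The waitforlisten step above blocks until the freshly launched nvmf_tgt (pid 1258518) is answering on its RPC socket before any rpc_cmd calls are issued. A loose sketch of what that wait amounts to is below; the retry count, polling interval, and the socket-file test are assumptions rather than the exact autotest_common.sh logic.

waitforlisten_sketch() {
    local pid=$1
    local rpc_sock=${2:-/var/tmp/spdk.sock}
    local i
    for i in $(seq 1 100); do
        # Bail out early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited before listening" >&2; return 1; }
        # Treat the appearance of the UNIX-domain RPC socket as "listening".
        [ -S "$rpc_sock" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $rpc_sock" >&2
    return 1
}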
00:28:51.235 [2024-07-15 12:18:40.979761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.235 [2024-07-15 12:18:40.979860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.235 [2024-07-15 12:18:40.979886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:51.235 [2024-07-15 12:18:40.979888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 [2024-07-15 12:18:41.705303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.804 12:18:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.804 Malloc1 00:28:51.804 [2024-07-15 12:18:41.800952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.063 Malloc2 00:28:52.063 Malloc3 00:28:52.063 Malloc4 00:28:52.063 Malloc5 00:28:52.063 Malloc6 00:28:52.063 Malloc7 00:28:52.323 Malloc8 00:28:52.323 Malloc9 00:28:52.323 Malloc10 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1258788 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1258788 /var/tmp/bdevperf.sock 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1258788 ']' 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:52.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
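At this point bdevperf is started against /var/tmp/bdevperf.sock with the attach-controller JSON delivered on /dev/fd/63. The usual way such a descriptor is produced is bash process substitution; the sketch below shows how this invocation is typically wired together (flags copied from the trace, but treat the wrapper as an illustration of the mechanism, not the verbatim shutdown.sh line, and note it assumes the sourced nvmf helpers that define gen_nvmf_target_json).

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# gen_nvmf_target_json's output shows up inside bdevperf as /dev/fd/63 via <(...).
"$bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!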
00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.323 { 00:28:52.323 "params": { 00:28:52.323 "name": "Nvme$subsystem", 00:28:52.323 "trtype": "$TEST_TRANSPORT", 00:28:52.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.323 "adrfam": "ipv4", 00:28:52.323 "trsvcid": "$NVMF_PORT", 00:28:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.323 "hdgst": ${hdgst:-false}, 00:28:52.323 "ddgst": ${ddgst:-false} 00:28:52.323 }, 00:28:52.323 "method": "bdev_nvme_attach_controller" 00:28:52.323 } 00:28:52.323 EOF 00:28:52.323 )") 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.323 { 00:28:52.323 "params": { 00:28:52.323 "name": "Nvme$subsystem", 00:28:52.323 "trtype": "$TEST_TRANSPORT", 00:28:52.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.323 "adrfam": "ipv4", 00:28:52.323 "trsvcid": "$NVMF_PORT", 00:28:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.323 "hdgst": ${hdgst:-false}, 00:28:52.323 "ddgst": ${ddgst:-false} 00:28:52.323 }, 00:28:52.323 "method": "bdev_nvme_attach_controller" 00:28:52.323 } 00:28:52.323 EOF 00:28:52.323 )") 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.323 { 00:28:52.323 "params": { 00:28:52.323 "name": "Nvme$subsystem", 00:28:52.323 "trtype": "$TEST_TRANSPORT", 00:28:52.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.323 "adrfam": "ipv4", 00:28:52.323 "trsvcid": "$NVMF_PORT", 00:28:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.323 "hdgst": ${hdgst:-false}, 00:28:52.323 "ddgst": ${ddgst:-false} 00:28:52.323 }, 00:28:52.323 "method": "bdev_nvme_attach_controller" 00:28:52.323 } 00:28:52.323 EOF 00:28:52.323 )") 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.323 { 00:28:52.323 "params": { 00:28:52.323 "name": "Nvme$subsystem", 00:28:52.323 "trtype": "$TEST_TRANSPORT", 00:28:52.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.323 "adrfam": "ipv4", 00:28:52.323 "trsvcid": "$NVMF_PORT", 
00:28:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.323 "hdgst": ${hdgst:-false}, 00:28:52.323 "ddgst": ${ddgst:-false} 00:28:52.323 }, 00:28:52.323 "method": "bdev_nvme_attach_controller" 00:28:52.323 } 00:28:52.323 EOF 00:28:52.323 )") 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.323 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.323 { 00:28:52.323 "params": { 00:28:52.323 "name": "Nvme$subsystem", 00:28:52.323 "trtype": "$TEST_TRANSPORT", 00:28:52.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.323 "adrfam": "ipv4", 00:28:52.323 "trsvcid": "$NVMF_PORT", 00:28:52.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.323 "hdgst": ${hdgst:-false}, 00:28:52.323 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.324 { 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme$subsystem", 00:28:52.324 "trtype": "$TEST_TRANSPORT", 00:28:52.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "$NVMF_PORT", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.324 "hdgst": ${hdgst:-false}, 00:28:52.324 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.324 { 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme$subsystem", 00:28:52.324 "trtype": "$TEST_TRANSPORT", 00:28:52.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "$NVMF_PORT", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.324 "hdgst": ${hdgst:-false}, 00:28:52.324 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 [2024-07-15 12:18:42.271705] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:28:52.324 [2024-07-15 12:18:42.271754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258788 ] 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.324 { 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme$subsystem", 00:28:52.324 "trtype": "$TEST_TRANSPORT", 00:28:52.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "$NVMF_PORT", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.324 "hdgst": ${hdgst:-false}, 00:28:52.324 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.324 { 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme$subsystem", 00:28:52.324 "trtype": "$TEST_TRANSPORT", 00:28:52.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "$NVMF_PORT", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.324 "hdgst": ${hdgst:-false}, 00:28:52.324 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.324 { 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme$subsystem", 00:28:52.324 "trtype": "$TEST_TRANSPORT", 00:28:52.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "$NVMF_PORT", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.324 "hdgst": ${hdgst:-false}, 00:28:52.324 "ddgst": ${ddgst:-false} 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 } 00:28:52.324 EOF 00:28:52.324 )") 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
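The jq call above is the last step of the config build; immediately afterwards the trace sets IFS=, and printf-joins the accumulated fragments into the single comma-separated parameter block that follows. That join relies on bash expanding "${array[*]}" with the first character of IFS as the separator, as this tiny self-contained example shows (the fragment values are placeholders, not the real config):

demo_ifs_join() {
    local -a config=('{"a":1}' '{"b":2}' '{"c":3}')
    local IFS=,
    printf '%s\n' "${config[*]}"    # prints: {"a":1},{"b":2},{"c":3}
}
demo_ifs_join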
00:28:52.324 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:52.324 12:18:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme1", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme2", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme3", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme4", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme5", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme6", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme7", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.324 "adrfam": "ipv4", 00:28:52.324 "trsvcid": "4420", 00:28:52.324 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:52.324 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:52.324 "hdgst": false, 00:28:52.324 "ddgst": false 00:28:52.324 }, 00:28:52.324 "method": "bdev_nvme_attach_controller" 00:28:52.324 },{ 00:28:52.324 "params": { 00:28:52.324 "name": "Nvme8", 00:28:52.324 "trtype": "tcp", 00:28:52.324 "traddr": "10.0.0.2", 00:28:52.325 "adrfam": "ipv4", 00:28:52.325 "trsvcid": "4420", 00:28:52.325 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:52.325 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:52.325 "hdgst": false, 00:28:52.325 "ddgst": false 00:28:52.325 }, 00:28:52.325 "method": "bdev_nvme_attach_controller" 00:28:52.325 },{ 00:28:52.325 "params": { 00:28:52.325 "name": "Nvme9", 00:28:52.325 "trtype": "tcp", 00:28:52.325 "traddr": "10.0.0.2", 00:28:52.325 "adrfam": "ipv4", 00:28:52.325 "trsvcid": "4420", 00:28:52.325 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:52.325 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:52.325 "hdgst": false, 00:28:52.325 "ddgst": false 00:28:52.325 }, 00:28:52.325 "method": "bdev_nvme_attach_controller" 00:28:52.325 },{ 00:28:52.325 "params": { 00:28:52.325 "name": "Nvme10", 00:28:52.325 "trtype": "tcp", 00:28:52.325 "traddr": "10.0.0.2", 00:28:52.325 "adrfam": "ipv4", 00:28:52.325 "trsvcid": "4420", 00:28:52.325 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:52.325 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:52.325 "hdgst": false, 00:28:52.325 "ddgst": false 00:28:52.325 }, 00:28:52.325 "method": "bdev_nvme_attach_controller" 00:28:52.325 }' 00:28:52.584 [2024-07-15 12:18:42.342869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.584 [2024-07-15 12:18:42.382741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.960 Running I/O for 10 seconds... 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:54.219 12:18:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:54.219 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:54.478 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1258788 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1258788 ']' 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1258788 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1258788 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1258788' 00:28:54.737 killing process with pid 1258788 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1258788 00:28:54.737 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1258788 00:28:54.737 Received shutdown signal, test time was about 0.698130 seconds 00:28:54.737 00:28:54.737 Latency(us) 00:28:54.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme1n1 : 0.68 281.83 17.61 0.00 0.00 224004.60 21085.50 208803.39 00:28:54.737 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme2n1 : 0.68 283.59 17.72 0.00 0.00 217217.71 
17780.20 211538.81 00:28:54.737 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme3n1 : 0.69 280.16 17.51 0.00 0.00 214107.71 18692.01 196949.93 00:28:54.737 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme4n1 : 0.69 277.12 17.32 0.00 0.00 211265.15 21313.45 219745.06 00:28:54.737 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme5n1 : 0.69 278.73 17.42 0.00 0.00 205259.46 30089.57 174154.80 00:28:54.737 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme6n1 : 0.66 194.83 12.18 0.00 0.00 284336.75 26100.42 240716.58 00:28:54.737 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme7n1 : 0.67 327.59 20.47 0.00 0.00 161870.03 8719.14 221568.67 00:28:54.737 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme8n1 : 0.67 285.06 17.82 0.00 0.00 184291.51 46502.07 165036.74 00:28:54.737 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme9n1 : 0.70 275.28 17.20 0.00 0.00 187049.48 17096.35 220656.86 00:28:54.737 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.737 Verification LBA range: start 0x0 length 0x400 00:28:54.737 Nvme10n1 : 0.69 287.09 17.94 0.00 0.00 172679.88 5271.37 184184.65 00:28:54.737 =================================================================================================================== 00:28:54.737 Total : 2771.27 173.20 0.00 0.00 202799.31 5271.37 240716.58 00:28:54.995 12:18:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1258518 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:28:55.931 rmmod nvme_tcp 00:28:55.931 rmmod nvme_fabrics 00:28:55.931 rmmod nvme_keyring 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1258518 ']' 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1258518 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1258518 ']' 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1258518 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:55.931 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1258518 00:28:56.190 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:56.190 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:56.190 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1258518' 00:28:56.190 killing process with pid 1258518 00:28:56.190 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1258518 00:28:56.190 12:18:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1258518 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.450 12:18:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.412 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.672 00:28:58.672 real 0m7.901s 00:28:58.672 user 0m23.968s 00:28:58.672 sys 0m1.217s 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.672 ************************************ 00:28:58.672 END TEST nvmf_shutdown_tc2 00:28:58.672 ************************************ 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:58.672 12:18:48 
nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.672 ************************************ 00:28:58.672 START TEST nvmf_shutdown_tc3 00:28:58.672 ************************************ 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:58.672 12:18:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.672 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.672 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.672 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.672 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.672 
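Note: the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansions traced above are how the harness maps each supported NIC PCI function to its kernel net device (in this run 0000:86:00.0 -> cvl_0_0 and 0000:86:00.1 -> cvl_0_1) before concluding is_hw=yes. A minimal standalone sketch of that sysfs lookup follows; the PCI address is the one reported in this run, while the operstate check is only an illustrative stand-in for the trace's [[ up == up ]] test.

#!/usr/bin/env bash
# Sketch of the sysfs lookup performed by the traced gather_supported_nvmf_pci_devs:
# list the net device(s) exposed by one PCI function and report their link state.
pci=0000:86:00.0                                   # address reported in this run

pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per netdev
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device names

for dev in "${pci_net_devs[@]}"; do
    # illustrative stand-in for the trace's 'up == up' check; some drivers report 'unknown'
    state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null || echo unknown)
    echo "Found net device under $pci: $dev ($state)"
done
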
12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.672 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.673 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:58.673 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:58.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:28:58.932 00:28:58.932 --- 10.0.0.2 ping statistics --- 00:28:58.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.932 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:28:58.932 00:28:58.932 --- 10.0.0.1 ping statistics --- 00:28:58.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.932 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1259959 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1259959 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1259959 ']' 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.932 12:18:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.932 [2024-07-15 12:18:48.829168] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
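Note: the nvmf_tcp_init sequence traced above (common.sh @229-@268) moves the target-side port into its own network namespace so the NVMe/TCP listener and the initiator talk over the two physical ports without touching the host stack. Condensed into a standalone sketch, with the interface names, namespace name and addresses exactly as used in this run:

#!/usr/bin/env bash
set -e
# Target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk (10.0.0.2); initiator
# port cvl_0_1 stays in the host namespace (10.0.0.1). Mirrors the trace above.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port through the host firewall, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt started next is launched through ip netns exec cvl_0_0_ns_spdk with -i 0 -e 0xFFFF -m 0x1E, which is why its listener on 10.0.0.2:4420 is only reachable via cvl_0_1.
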
00:28:58.932 [2024-07-15 12:18:48.829211] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.932 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.932 [2024-07-15 12:18:48.900529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.190 [2024-07-15 12:18:48.942987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.190 [2024-07-15 12:18:48.943025] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.190 [2024-07-15 12:18:48.943032] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.191 [2024-07-15 12:18:48.943038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.191 [2024-07-15 12:18:48.943043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.191 [2024-07-15 12:18:48.943156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.191 [2024-07-15 12:18:48.943279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.191 [2024-07-15 12:18:48.943372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.191 [2024-07-15 12:18:48.943372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 [2024-07-15 12:18:49.669176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.759 12:18:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.759 12:18:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 Malloc1 00:29:00.019 [2024-07-15 12:18:49.765203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.019 Malloc2 00:29:00.019 Malloc3 00:29:00.019 Malloc4 00:29:00.019 Malloc5 00:29:00.019 Malloc6 00:29:00.019 Malloc7 00:29:00.278 Malloc8 00:29:00.278 Malloc9 00:29:00.278 Malloc10 00:29:00.278 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.278 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:00.278 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1260236 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1260236 
/var/tmp/bdevperf.sock 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1260236 ']' 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 
00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": 
"Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 [2024-07-15 12:18:50.234286] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:29:00.279 [2024-07-15 12:18:50.234334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260236 ] 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 )") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.279 { 00:29:00.279 "params": { 00:29:00.279 "name": "Nvme$subsystem", 00:29:00.279 "trtype": "$TEST_TRANSPORT", 00:29:00.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.279 "adrfam": "ipv4", 00:29:00.279 "trsvcid": "$NVMF_PORT", 00:29:00.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.279 "hdgst": ${hdgst:-false}, 00:29:00.279 "ddgst": ${ddgst:-false} 00:29:00.279 }, 00:29:00.279 "method": "bdev_nvme_attach_controller" 00:29:00.279 } 00:29:00.279 EOF 00:29:00.279 
)") 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:00.279 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:00.280 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.280 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:00.280 12:18:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme1", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme2", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme3", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme4", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme5", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme6", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme7", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme8", 00:29:00.280 
"trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme9", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 },{ 00:29:00.280 "params": { 00:29:00.280 "name": "Nvme10", 00:29:00.280 "trtype": "tcp", 00:29:00.280 "traddr": "10.0.0.2", 00:29:00.280 "adrfam": "ipv4", 00:29:00.280 "trsvcid": "4420", 00:29:00.280 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:00.280 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:00.280 "hdgst": false, 00:29:00.280 "ddgst": false 00:29:00.280 }, 00:29:00.280 "method": "bdev_nvme_attach_controller" 00:29:00.280 }' 00:29:00.538 [2024-07-15 12:18:50.304657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.538 [2024-07-15 12:18:50.344348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.917 Running I/O for 10 seconds... 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- 
# set +x 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.917 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.177 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:02.177 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:02.177 12:18:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.436 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.437 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:02.437 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:02.437 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1259959 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1259959 ']' 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1259959 00:29:02.708 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:29:02.709 12:18:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259959 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259959' 00:29:02.709 killing process with pid 1259959 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1259959 00:29:02.709 12:18:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1259959 00:29:02.709 [2024-07-15 12:18:52.604174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604326] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 
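Note: the flood of tcp.c:1607 nvmf_tcp_qpair_set_recv_state messages around here is the target (pid 1259959) being killed while bdevperf still holds its ten NVMe/TCP connections, which is what tc3 exercises. The gate that allowed the kill is the waitforio poll traced above (target/shutdown.sh @57-@69, read_io_count going 3 -> 67 -> 131). Reconstructed as a standalone sketch from those traced lines, with rpc.py standing in for the suite's rpc_cmd wrapper:

# Poll bdevperf's iostat over its RPC socket until Nvme1n1 has completed at least
# 100 reads, or give up after 10 attempts; only then is the target torn down.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1    # in this run: 3, then 67, then 131
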
00:29:02.709 [2024-07-15 12:18:52.604467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is 
same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.604614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef530 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.606490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1f30 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9d0 is same with the state(5) to be set 00:29:02.709 [2024-07-15 12:18:52.607473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:29:02.710 [2024-07-15 12:18:52.609115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5efe90 is same with the state(5) to be set
00:29:02.711 [2024-07-15 12:18:52.610451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f0330 is same with the state(5) to be set
00:29:02.711 [2024-07-15 12:18:52.611525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f07d0 is same with the state(5) to be set
00:29:02.712 [2024-07-15 12:18:52.612697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f0c70 is same with the state(5) to be set
00:29:02.713 [2024-07-15 12:18:52.614005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f1130 is same with the state(5) to be set
00:29:02.714 [2024-07-15 12:18:52.615159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set
00:29:02.714 [2024-07-15 12:18:52.615244] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615296] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 
00:29:02.714 [2024-07-15 12:18:52.615378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is 
same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615524] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.615554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f15d0 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.616363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35d70 is same with the state(5) to be set 00:29:02.714 [2024-07-15 12:18:52.616478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.714 [2024-07-15 12:18:52.616506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.714 [2024-07-15 12:18:52.616513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1179c30 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41610 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616676] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fe9d0 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e7b10 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11583a0 is same with the 
state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315210 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.616943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.616991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.616997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11496b0 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.617018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617032] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e78c0 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.617096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.715 [2024-07-15 12:18:52.617146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130ba20 is same with the state(5) to be set 00:29:02.715 [2024-07-15 12:18:52.617641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.715 [2024-07-15 12:18:52.617662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.715 [2024-07-15 12:18:52.617684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.715 [2024-07-15 12:18:52.617699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.715 [2024-07-15 12:18:52.617714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.715 [2024-07-15 12:18:52.617728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.715 [2024-07-15 12:18:52.617737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.617989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.617999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.716 [2024-07-15 12:18:52.618155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 
[2024-07-15 12:18:52.618309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.716 [2024-07-15 12:18:52.618382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.716 [2024-07-15 12:18:52.618391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 
12:18:52.618459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 
12:18:52.618608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.618623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.618631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b81b0 is same with the state(5) to be set 00:29:02.717 [2024-07-15 12:18:52.618686] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b81b0 was disconnected and freed. reset controller. 00:29:02.717 [2024-07-15 12:18:52.635643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35d70 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1179c30 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41610 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fe9d0 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e7b10 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11583a0 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1315210 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11496b0 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e78c0 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.635813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130ba20 (9): Bad file descriptor 00:29:02.717 [2024-07-15 12:18:52.637288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.717 [2024-07-15 12:18:52.637532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.717 [2024-07-15 12:18:52.637540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.637988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.637994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.718 [2024-07-15 12:18:52.638165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.718 [2024-07-15 12:18:52.638171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.638264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.638272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b9670 is same with the state(5) to be set 00:29:02.719 [2024-07-15 12:18:52.639407] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b9670 was disconnected and freed. reset controller. 00:29:02.719 [2024-07-15 12:18:52.640729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:02.719 [2024-07-15 12:18:52.641386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:02.719 [2024-07-15 12:18:52.641549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.719 [2024-07-15 12:18:52.641566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130ba20 with addr=10.0.0.2, port=4420 00:29:02.719 [2024-07-15 12:18:52.641575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130ba20 is same with the state(5) to be set 00:29:02.719 [2024-07-15 12:18:52.641620] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.641666] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.641711] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.641754] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.641799] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.641872] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.719 [2024-07-15 12:18:52.642317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.719 [2024-07-15 12:18:52.642334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fe9d0 with addr=10.0.0.2, port=4420 00:29:02.719 [2024-07-15 12:18:52.642342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fe9d0 is same with the state(5) to be set 00:29:02.719 [2024-07-15 12:18:52.642353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130ba20 (9): Bad file descriptor 00:29:02.719 [2024-07-15 12:18:52.642452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fe9d0 (9): Bad file descriptor 00:29:02.719 [2024-07-15 12:18:52.642463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:02.719 [2024-07-15 12:18:52.642469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:02.719 [2024-07-15 12:18:52.642477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
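Two of the recurring messages above can be read directly against the NVMe and POSIX definitions: the completion status printed as "(00/08)" is status code type 00h (generic command status) with status code 08h, "Command Aborted due to SQ Deletion", which is reported for every I/O still outstanding on a queue pair when that queue pair is torn down; and "connect() failed, errno = 111" is ECONNREFUSED on Linux, i.e. nothing was accepting on 10.0.0.2 port 4420 at the moment the reconnect was attempted. A minimal C sketch of that decoding follows; it is illustrative only, and the helper name is not an SPDK symbol.

    /* Sketch only (not SPDK code): decode the "(00/08)" status and the
     * connect() errno seen in the reconnect attempts above. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* "(00/08)" in the log is status-code-type 0x00 / status-code 0x08. */
    static const char *nvme_generic_status_str(unsigned sct, unsigned sc)
    {
        if (sct == 0x00 && sc == 0x08)
            return "ABORTED - SQ DELETION"; /* command aborted: its SQ was deleted */
        return "other/unknown status";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", nvme_generic_status_str(0x00, 0x08));
        /* errno = 111 on Linux is ECONNREFUSED: no listener on 10.0.0.2:4420 yet. */
        printf("errno 111 -> %s\n", strerror(ECONNREFUSED));
        return 0;
    }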
00:29:02.719 [2024-07-15 12:18:52.642529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 
12:18:52.642689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642836] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.719 [2024-07-15 12:18:52.642938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.719 [2024-07-15 12:18:52.642944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.642952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.642959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.642967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.642973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.642981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.642989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.642997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11445a0 is same with the state(5) to be set 00:29:02.720 [2024-07-15 12:18:52.643540] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11445a0 was disconnected and freed. reset controller. 00:29:02.720 [2024-07-15 12:18:52.643571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.720 [2024-07-15 12:18:52.643649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.720 [2024-07-15 12:18:52.643655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.643993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.643999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.721 [2024-07-15 12:18:52.644207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.721 [2024-07-15 12:18:52.644216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
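Each per-command NOTICE line above identifies one outstanding I/O by sqid/cid/nsid together with its starting lba and its length in blocks; the lba advances by 128 for each successive cid, so the aborted commands form a sequential stream of 128-block I/Os. The short sketch below (plain C, not SPDK code) turns a few such lba/len pairs into byte ranges; the 512-byte block size is an assumption for illustration only and is not stated in the log.

    /* Sketch only: reconstruct byte ranges behind the per-command NOTICE lines.
     * len:128 blocks per command, lba advancing by 128 per cid (sequential).
     * The 512-byte block size is assumed, not taken from the log. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint32_t block_size = 512;   /* assumed namespace block size */
        const uint32_t len_blocks = 128;   /* "len:128" in the log */
        uint64_t lba = 24576;              /* first "lba:" printed for this qpair */

        for (int cid = 0; cid < 4; cid++, lba += len_blocks) {
            printf("cid %2d: lba %llu-%llu (%u KiB at offset %llu KiB)\n",
                   cid,
                   (unsigned long long)lba,
                   (unsigned long long)(lba + len_blocks - 1),
                   len_blocks * block_size / 1024,
                   (unsigned long long)(lba * block_size / 1024));
        }
        return 0;
    }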
00:29:02.722 [2024-07-15 12:18:52.644257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 
12:18:52.644405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.722 [2024-07-15 12:18:52.644521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.722 [2024-07-15 12:18:52.644528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b6f00 is same with the state(5) to be set 00:29:02.722 [2024-07-15 12:18:52.644579] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b6f00 was disconnected and freed. reset controller. 00:29:02.722 [2024-07-15 12:18:52.644597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.722 [2024-07-15 12:18:52.644614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:29:02.722 [2024-07-15 12:18:52.644622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:29:02.722 [2024-07-15 12:18:52.644631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:29:02.722 [2024-07-15 12:18:52.646509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:02.722 [2024-07-15 12:18:52.646522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:02.722 [2024-07-15 12:18:52.646532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:02.722 [2024-07-15 12:18:52.646842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.722 [2024-07-15 12:18:52.646858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc41610 with addr=10.0.0.2, port=4420
00:29:02.722 [2024-07-15 12:18:52.646865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41610 is same with the state(5) to be set
00:29:02.722 [2024-07-15 12:18:52.647000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.722 [2024-07-15 12:18:52.647009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e78c0 with addr=10.0.0.2, port=4420
00:29:02.722 [2024-07-15 12:18:52.647015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e78c0 is same with the state(5) to be set
00:29:02.722 [2024-07-15 12:18:52.647059 - 12:18:52.648002] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.724 [2024-07-15 12:18:52.648009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f20c0 is same with the state(5) to be set
00:29:02.724 [2024-07-15 12:18:52.649028 - 12:18:52.649974] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.725 [2024-07-15 12:18:52.649981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f3340 is same with the state(5) to be set
00:29:02.725 [2024-07-15 12:18:52.650984 - 12:18:52.651512] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:5-36 nsid:1 lba:25216-29184 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.726 [2024-07-15 12:18:52.651520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.726 [2024-07-15 12:18:52.651526] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.726 [2024-07-15 12:18:52.651745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.726 [2024-07-15 12:18:52.651752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.651841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.651847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.656124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.656140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.656154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.656169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.656184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.656191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f47d0 is same with the state(5) to be set 00:29:02.727 [2024-07-15 12:18:52.657208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657250] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.727 [2024-07-15 12:18:52.657665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.727 [2024-07-15 12:18:52.657672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.728 [2024-07-15 12:18:52.657847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.657977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.657985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 
12:18:52.657992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.658153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.658159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5cf0 is same with the state(5) to be set 00:29:02.728 [2024-07-15 12:18:52.659163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.728 [2024-07-15 12:18:52.659299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.728 [2024-07-15 12:18:52.659307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.729 [2024-07-15 12:18:52.659845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.729 [2024-07-15 12:18:52.659852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659875] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.659986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.659994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.660103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.660110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143130 is same with the state(5) to be set 00:29:02.730 [2024-07-15 12:18:52.661313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.730 [2024-07-15 12:18:52.661682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.730 [2024-07-15 12:18:52.661689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.661992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.661999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.731 [2024-07-15 12:18:52.662268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.731 [2024-07-15 12:18:52.662276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1145a10 is same with the state(5) to be set 00:29:02.731 [2024-07-15 12:18:52.663716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.731 [2024-07-15 12:18:52.663735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:02.731 [2024-07-15 12:18:52.663745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:02.731 [2024-07-15 12:18:52.663753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:02.731 [2024-07-15 12:18:52.663793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41610 (9): Bad file descriptor 00:29:02.731 [2024-07-15 12:18:52.663804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e78c0 (9): Bad file descriptor 00:29:02.731 [2024-07-15 12:18:52.663846] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.731 [2024-07-15 12:18:52.663857] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.732 [2024-07-15 12:18:52.663870] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.732 [2024-07-15 12:18:52.663879] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:02.732 [2024-07-15 12:18:52.664176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:02.732 task offset: 24576 on job bdev=Nvme9n1 fails
00:29:02.732
00:29:02.732 Latency(us)
00:29:02.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.732 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme1n1 ended in about 0.91 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme1n1 : 0.91 210.02 13.13 70.01 0.00 226247.46 16640.45 218833.25
00:29:02.732 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme2n1 ended in about 0.92 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme2n1 : 0.92 209.57 13.10 69.86 0.00 222777.88 15956.59 217921.45
00:29:02.732 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme3n1 ended in about 0.92 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme3n1 : 0.92 213.58 13.35 69.39 0.00 216140.30 6838.54 213362.42
00:29:02.732 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme4n1 ended in about 0.92 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme4n1 : 0.92 207.72 12.98 69.24 0.00 216943.08 15044.79 214274.23
00:29:02.732 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme5n1 ended in about 0.93 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme5n1 : 0.93 211.60 13.22 69.09 0.00 210133.40 7693.36 216097.84
00:29:02.732 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme6n1 ended in about 0.91 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme6n1 : 0.91 210.79 13.17 70.26 0.00 205608.74 18578.03 218833.25
00:29:02.732 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme7n1 ended in about 0.93 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme7n1 : 0.93 212.19 13.26 68.93 0.00 202137.24 14132.98 229774.91
00:29:02.732 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme8n1 ended in about 0.91 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme8n1 : 0.91 214.96 13.44 70.19 0.00 194905.93 6268.66 219745.06
00:29:02.732 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme9n1 ended in about 0.90 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme9n1 : 0.90 212.73 13.30 70.91 0.00 191744.22 20287.67 219745.06
00:29:02.732 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.732 Job: Nvme10n1 ended in about 0.91 seconds with error
00:29:02.732 Verification LBA range: start 0x0 length 0x400
00:29:02.732 Nvme10n1 : 0.91 211.92 13.25 70.64 0.00 188763.99 4559.03 237069.36
00:29:02.732 ===================================================================================================================
00:29:02.732 Total : 2115.07 132.19 698.52 0.00 207530.85 4559.03 237069.36
00:29:02.732 [2024-07-15 12:18:52.685280] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:29:02.732 [2024-07-15 12:18:52.685314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:02.732 [2024-07-15 12:18:52.685581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.685599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11496b0 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.685609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11496b0 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.685803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.685813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1315210 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.685820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1315210 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.686050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.686060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd35d70 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.686067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd35d70 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.686279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.686289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11583a0 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.686296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11583a0 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.686303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.686309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.686317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:02.732 [2024-07-15 12:18:52.686332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.686343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.686349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:02.732 [2024-07-15 12:18:52.687671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:02.732 [2024-07-15 12:18:52.687686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:02.732 [2024-07-15 12:18:52.687694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.732 [2024-07-15 12:18:52.687701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
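The Latency(us) summary above is bdevperf's per-device roll-up for the verify jobs that were cut short by the shutdown. As a quick sanity check on those columns, the MiB/s figure should simply be IOPS multiplied by the 65536-byte IO size shown in each "Job:" line; a throwaway one-liner (not part of the test run) for the Nvme1n1 row:

    awk -v iops=210.02 -v io_size=65536 'BEGIN { printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 13.13 MiB/s, matching the table entry; the Total row checks out the same way (2115.07 IOPS -> 132.19 MiB/s)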
00:29:02.732 [2024-07-15 12:18:52.688000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.688013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1179c30 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.688020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1179c30 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.688261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.688284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e7b10 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.688292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e7b10 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.688304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11496b0 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1315210 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd35d70 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11583a0 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688374] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.732 [2024-07-15 12:18:52.688388] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.732 [2024-07-15 12:18:52.688397] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:02.732 [2024-07-15 12:18:52.688406] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:02.732 [2024-07-15 12:18:52.688575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.688588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130ba20 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.688595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130ba20 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.688843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.732 [2024-07-15 12:18:52.688854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fe9d0 with addr=10.0.0.2, port=4420 00:29:02.732 [2024-07-15 12:18:52.688860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fe9d0 is same with the state(5) to be set 00:29:02.732 [2024-07-15 12:18:52.688869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1179c30 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e7b10 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.688885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.688895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.688902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.732 [2024-07-15 12:18:52.688912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.688917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.688923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:02.732 [2024-07-15 12:18:52.688932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.688938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.688944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:02.732 [2024-07-15 12:18:52.688953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:02.732 [2024-07-15 12:18:52.688958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:02.732 [2024-07-15 12:18:52.688965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:02.732 [2024-07-15 12:18:52.689027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:02.732 [2024-07-15 12:18:52.689037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:02.732 [2024-07-15 12:18:52.689045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.732 [2024-07-15 12:18:52.689051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.732 [2024-07-15 12:18:52.689056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.732 [2024-07-15 12:18:52.689062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.732 [2024-07-15 12:18:52.689079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130ba20 (9): Bad file descriptor 00:29:02.732 [2024-07-15 12:18:52.689087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fe9d0 (9): Bad file descriptor 00:29:02.733 [2024-07-15 12:18:52.689094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:02.733 [2024-07-15 12:18:52.689115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:02.733 [2024-07-15 12:18:52.689152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.733 [2024-07-15 12:18:52.689158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.733 [2024-07-15 12:18:52.689272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 12:18:52.689283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e78c0 with addr=10.0.0.2, port=4420 00:29:02.733 [2024-07-15 12:18:52.689290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e78c0 is same with the state(5) to be set 00:29:02.733 [2024-07-15 12:18:52.689443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.733 [2024-07-15 12:18:52.689456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc41610 with addr=10.0.0.2, port=4420 00:29:02.733 [2024-07-15 12:18:52.689463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41610 is same with the state(5) to be set 00:29:02.733 [2024-07-15 12:18:52.689470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:29:02.733 [2024-07-15 12:18:52.689490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:02.733 [2024-07-15 12:18:52.689795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.733 [2024-07-15 12:18:52.689804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.733 [2024-07-15 12:18:52.689813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e78c0 (9): Bad file descriptor 00:29:02.733 [2024-07-15 12:18:52.689823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41610 (9): Bad file descriptor 00:29:02.733 [2024-07-15 12:18:52.689847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:02.733 [2024-07-15 12:18:52.689869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:02.733 [2024-07-15 12:18:52.689874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:02.733 [2024-07-15 12:18:52.689880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:02.733 [2024-07-15 12:18:52.689903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:02.733 [2024-07-15 12:18:52.689910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
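Every "connect() failed, errno = 111" line from posix.c above is Linux's ECONNREFUSED: by this point the target side has been taken down, so nothing is listening on 10.0.0.2 port 4420 and each reconnect attempt for cnode1 through cnode10 ends in the "Resetting controller failed" messages. If the numeric errno is ever in doubt, it can be decoded outside the harness (python3 is available in this environment):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused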
00:29:03.299 12:18:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:03.299 12:18:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1260236 00:29:04.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1260236) - No such process 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:04.233 rmmod nvme_tcp 00:29:04.233 rmmod nvme_fabrics 00:29:04.233 rmmod nvme_keyring 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.233 12:18:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.760 12:18:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:06.760 00:29:06.760 real 0m7.706s 00:29:06.760 user 0m18.723s 00:29:06.760 sys 0m1.357s 00:29:06.760 
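The tail of shutdown_tc3 above is the usual stoptarget/nvmftestfini teardown: the saved nvmfpid is force-killed (the "No such process" result is expected, since the target already went down during the test), the bdevperf.conf and rpcs.txt artifacts are removed, and the kernel NVMe/TCP modules are unloaded, which is why modprobe's verbose output prints the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring. A minimal sketch of the same cleanup done by hand (the $nvmfpid variable and the relative paths are illustrative, not taken from the log):

    kill -9 "$nvmfpid" 2>/dev/null || true    # tolerate a target that has already exited
    rm -f ./local-job0-0-verify.state         # bdevperf verify-job state file
    rm -f bdevperf.conf rpcs.txt              # per-run artifacts under test/nvmf/target
    modprobe -v -r nvme-tcp                   # verbose removal; also drops now-unused nvme_fabrics/nvme_keyring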
12:18:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.760 12:18:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.760 ************************************ 00:29:06.760 END TEST nvmf_shutdown_tc3 00:29:06.760 ************************************ 00:29:06.761 12:18:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:06.761 12:18:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:06.761 00:29:06.761 real 0m31.004s 00:29:06.761 user 1m16.178s 00:29:06.761 sys 0m8.461s 00:29:06.761 12:18:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:06.761 12:18:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:06.761 ************************************ 00:29:06.761 END TEST nvmf_shutdown 00:29:06.761 ************************************ 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:06.761 12:18:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.761 12:18:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.761 12:18:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:29:06.761 12:18:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.761 12:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.761 ************************************ 00:29:06.761 START TEST nvmf_multicontroller 00:29:06.761 ************************************ 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:06.761 * Looking for test storage... 
00:29:06.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:06.761 12:18:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:06.761 12:18:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.032 12:19:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.032 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.032 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.032 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.032 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.033 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:12.033 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:12.033 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.033 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.033 12:19:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.033 12:19:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.033 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:12.033 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:12.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:29:12.291 00:29:12.291 --- 10.0.0.2 ping statistics --- 00:29:12.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.291 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:12.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:29:12.291 00:29:12.291 --- 10.0.0.1 ping statistics --- 00:29:12.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.291 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1264280 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1264280 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1264280 ']' 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.291 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.291 [2024-07-15 12:19:02.220161] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:29:12.291 [2024-07-15 12:19:02.220212] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.291 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.291 [2024-07-15 12:19:02.290290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.550 [2024-07-15 12:19:02.331751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.550 [2024-07-15 12:19:02.331790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.550 [2024-07-15 12:19:02.331798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.550 [2024-07-15 12:19:02.331804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.550 [2024-07-15 12:19:02.331809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
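Condensed, the multicontroller steps traced below amount to the following sequence. This is a sketch only: it assumes, as the trace indicates, that rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock for the target and that bdevperf exposes its own RPC socket at /var/tmp/bdevperf.sock; the second subsystem (cnode2/Malloc1) is omitted for brevity.

  # target side: TCP transport, a malloc namespace, one subsystem, two listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # host side: attach a controller named NVMe0 through bdevperf, then confirm that
  # re-attaching the same controller name with a different hostnqn, subsystem, or
  # multipath mode is rejected with JSON-RPC error -114 (as the request/response
  # pairs recorded below show)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000   # expected to fail: name already exists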
00:29:12.550 [2024-07-15 12:19:02.331923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.550 [2024-07-15 12:19:02.332046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.550 [2024-07-15 12:19:02.332047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 [2024-07-15 12:19:02.461943] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 Malloc0 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.550 [2024-07-15 12:19:02.525493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.550 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.550 
12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.551 [2024-07-15 12:19:02.533445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.551 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 Malloc1 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1264449 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1264449 /var/tmp/bdevperf.sock 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1264449 ']' 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.809 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.068 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.068 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:29:13.068 12:19:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:13.068 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.068 12:19:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.068 NVMe0n1 00:29:13.068 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.068 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.068 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:13.068 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.068 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.327 1 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.327 request: 00:29:13.327 { 00:29:13.327 "name": "NVMe0", 00:29:13.327 "trtype": "tcp", 00:29:13.327 "traddr": "10.0.0.2", 00:29:13.327 "adrfam": "ipv4", 00:29:13.327 "trsvcid": "4420", 00:29:13.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.327 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:13.327 "hostaddr": "10.0.0.2", 00:29:13.327 "hostsvcid": "60000", 00:29:13.327 "prchk_reftag": false, 00:29:13.327 "prchk_guard": false, 00:29:13.327 "hdgst": false, 00:29:13.327 "ddgst": false, 00:29:13.327 "method": "bdev_nvme_attach_controller", 00:29:13.327 "req_id": 1 00:29:13.327 } 00:29:13.327 Got JSON-RPC error response 00:29:13.327 response: 00:29:13.327 { 00:29:13.327 "code": -114, 00:29:13.327 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:13.327 } 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.327 request: 00:29:13.327 { 00:29:13.327 "name": "NVMe0", 00:29:13.327 "trtype": "tcp", 00:29:13.327 "traddr": "10.0.0.2", 00:29:13.327 "adrfam": "ipv4", 00:29:13.327 "trsvcid": "4420", 00:29:13.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:13.327 "hostaddr": "10.0.0.2", 00:29:13.327 "hostsvcid": "60000", 00:29:13.327 "prchk_reftag": false, 00:29:13.327 "prchk_guard": false, 00:29:13.327 
"hdgst": false, 00:29:13.327 "ddgst": false, 00:29:13.327 "method": "bdev_nvme_attach_controller", 00:29:13.327 "req_id": 1 00:29:13.327 } 00:29:13.327 Got JSON-RPC error response 00:29:13.327 response: 00:29:13.327 { 00:29:13.327 "code": -114, 00:29:13.327 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:13.327 } 00:29:13.327 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.328 request: 00:29:13.328 { 00:29:13.328 "name": "NVMe0", 00:29:13.328 "trtype": "tcp", 00:29:13.328 "traddr": "10.0.0.2", 00:29:13.328 "adrfam": "ipv4", 00:29:13.328 "trsvcid": "4420", 00:29:13.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.328 "hostaddr": "10.0.0.2", 00:29:13.328 "hostsvcid": "60000", 00:29:13.328 "prchk_reftag": false, 00:29:13.328 "prchk_guard": false, 00:29:13.328 "hdgst": false, 00:29:13.328 "ddgst": false, 00:29:13.328 "multipath": "disable", 00:29:13.328 "method": "bdev_nvme_attach_controller", 00:29:13.328 "req_id": 1 00:29:13.328 } 00:29:13.328 Got JSON-RPC error response 00:29:13.328 response: 00:29:13.328 { 00:29:13.328 "code": -114, 00:29:13.328 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:13.328 } 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:13.328 12:19:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.328 request: 00:29:13.328 { 00:29:13.328 "name": "NVMe0", 00:29:13.328 "trtype": "tcp", 00:29:13.328 "traddr": "10.0.0.2", 00:29:13.328 "adrfam": "ipv4", 00:29:13.328 "trsvcid": "4420", 00:29:13.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.328 "hostaddr": "10.0.0.2", 00:29:13.328 "hostsvcid": "60000", 00:29:13.328 "prchk_reftag": false, 00:29:13.328 "prchk_guard": false, 00:29:13.328 "hdgst": false, 00:29:13.328 "ddgst": false, 00:29:13.328 "multipath": "failover", 00:29:13.328 "method": "bdev_nvme_attach_controller", 00:29:13.328 "req_id": 1 00:29:13.328 } 00:29:13.328 Got JSON-RPC error response 00:29:13.328 response: 00:29:13.328 { 00:29:13.328 "code": -114, 00:29:13.328 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:13.328 } 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.328 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.586 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.586 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:13.586 12:19:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.962 0 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1264449 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1264449 ']' 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1264449 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264449 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264449' 00:29:14.962 killing process with pid 1264449 00:29:14.962 12:19:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1264449 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1264449 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:29:14.962 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:14.962 [2024-07-15 12:19:02.632031] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:29:14.962 [2024-07-15 12:19:02.632082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264449 ] 00:29:14.962 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.962 [2024-07-15 12:19:02.701183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.962 [2024-07-15 12:19:02.742469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.962 [2024-07-15 12:19:03.468489] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name b5c63826-9cc9-4271-9529-2d4fdbb0cfc2 already exists 00:29:14.962 [2024-07-15 12:19:03.468517] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:b5c63826-9cc9-4271-9529-2d4fdbb0cfc2 alias for bdev NVMe1n1 00:29:14.962 [2024-07-15 12:19:03.468525] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:14.962 Running I/O for 1 seconds... 
00:29:14.962 00:29:14.962 Latency(us) 00:29:14.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.962 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:14.962 NVMe0n1 : 1.01 23290.30 90.98 0.00 0.00 5478.04 4872.46 10086.85 00:29:14.962 =================================================================================================================== 00:29:14.962 Total : 23290.30 90.98 0.00 0.00 5478.04 4872.46 10086.85 00:29:14.962 Received shutdown signal, test time was about 1.000000 seconds 00:29:14.962 00:29:14.962 Latency(us) 00:29:14.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.962 =================================================================================================================== 00:29:14.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.962 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:14.962 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:14.963 rmmod nvme_tcp 00:29:14.963 rmmod nvme_fabrics 00:29:14.963 rmmod nvme_keyring 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1264280 ']' 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1264280 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1264280 ']' 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1264280 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.963 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264280 00:29:15.223 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:15.223 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:15.223 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264280' 00:29:15.223 killing process with pid 1264280 00:29:15.223 12:19:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1264280 00:29:15.223 12:19:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1264280 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.223 12:19:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.822 12:19:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:17.822 00:29:17.822 real 0m10.946s 00:29:17.822 user 0m12.369s 00:29:17.822 sys 0m5.058s 00:29:17.822 12:19:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:17.822 12:19:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:17.822 ************************************ 00:29:17.822 END TEST nvmf_multicontroller 00:29:17.822 ************************************ 00:29:17.822 12:19:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:17.822 12:19:07 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:17.822 12:19:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:17.822 12:19:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.822 12:19:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.822 ************************************ 00:29:17.822 START TEST nvmf_aer 00:29:17.822 ************************************ 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:17.822 * Looking for test storage... 
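Each host-suite test (multicontroller above, aer below) rebuilds the same namespace-based TCP loopback before launching its target. Stripped of the xtrace prefixes, the setup recorded in the trace reduces to roughly the following; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are taken directly from this run.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability check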
00:29:17.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:17.822 12:19:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:23.092 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.092 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:23.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:29:23.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:23.093 Found net devices under 0000:86:00.0: cvl_0_0 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:23.093 Found net devices under 0000:86:00.1: cvl_0_1 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.093 
12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.093 12:19:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.093 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.093 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.093 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.093 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:29:23.354 00:29:23.354 --- 10.0.0.2 ping statistics --- 00:29:23.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.354 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:23.354 00:29:23.354 --- 10.0.0.1 ping statistics --- 00:29:23.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.354 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1268282 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1268282 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1268282 ']' 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.354 12:19:13 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:23.354 [2024-07-15 12:19:13.252762] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:29:23.354 [2024-07-15 12:19:13.252807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.354 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.354 [2024-07-15 12:19:13.327055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.613 [2024-07-15 12:19:13.369400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.613 [2024-07-15 12:19:13.369436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
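For reference, the nvmf_tcp_init sequence traced above boils down to roughly the commands below. This is a condensed sketch of what the harness ran, not an exact replay; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the ones detected in this particular run and will differ on other hosts.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                            # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> host reachability check
modprobe nvme-tcp                                             # kernel NVMe/TCP support for later tests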
00:29:23.613 [2024-07-15 12:19:13.369443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.613 [2024-07-15 12:19:13.369449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.613 [2024-07-15 12:19:13.369454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.613 [2024-07-15 12:19:13.369499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.613 [2024-07-15 12:19:13.369609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.613 [2024-07-15 12:19:13.369625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.613 [2024-07-15 12:19:13.369630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.181 [2024-07-15 12:19:14.127399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.181 Malloc0 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.181 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.182 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.182 [2024-07-15 12:19:14.178909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:29:24.440 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.440 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:24.440 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.440 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.440 [ 00:29:24.440 { 00:29:24.440 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.440 "subtype": "Discovery", 00:29:24.440 "listen_addresses": [], 00:29:24.440 "allow_any_host": true, 00:29:24.440 "hosts": [] 00:29:24.440 }, 00:29:24.440 { 00:29:24.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.441 "subtype": "NVMe", 00:29:24.441 "listen_addresses": [ 00:29:24.441 { 00:29:24.441 "trtype": "TCP", 00:29:24.441 "adrfam": "IPv4", 00:29:24.441 "traddr": "10.0.0.2", 00:29:24.441 "trsvcid": "4420" 00:29:24.441 } 00:29:24.441 ], 00:29:24.441 "allow_any_host": true, 00:29:24.441 "hosts": [], 00:29:24.441 "serial_number": "SPDK00000000000001", 00:29:24.441 "model_number": "SPDK bdev Controller", 00:29:24.441 "max_namespaces": 2, 00:29:24.441 "min_cntlid": 1, 00:29:24.441 "max_cntlid": 65519, 00:29:24.441 "namespaces": [ 00:29:24.441 { 00:29:24.441 "nsid": 1, 00:29:24.441 "bdev_name": "Malloc0", 00:29:24.441 "name": "Malloc0", 00:29:24.441 "nguid": "3F7357E9B7824E1AA87F4BFA726C40C5", 00:29:24.441 "uuid": "3f7357e9-b782-4e1a-a87f-4bfa726c40c5" 00:29:24.441 } 00:29:24.441 ] 00:29:24.441 } 00:29:24.441 ] 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1268450 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:24.441 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:24.441 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 Malloc1 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 Asynchronous Event Request test 00:29:24.700 Attaching to 10.0.0.2 00:29:24.700 Attached to 10.0.0.2 00:29:24.700 Registering asynchronous event callbacks... 00:29:24.700 Starting namespace attribute notice tests for all controllers... 00:29:24.700 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:24.700 aer_cb - Changed Namespace 00:29:24.700 Cleaning up... 
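Condensed, the nvmf_aer flow exercised above is the rpc_cmd sequence below (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace). Attaching a second namespace while the aer tool is connected is what produces the namespace-attribute notice (log page 4, event type 0x02) seen in its output.

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 --name Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: the aer test tool connects and registers asynchronous event callbacks
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
# target side: adding a second namespace triggers the Changed Namespace AEN
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2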
00:29:24.700 [ 00:29:24.700 { 00:29:24.700 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.700 "subtype": "Discovery", 00:29:24.700 "listen_addresses": [], 00:29:24.700 "allow_any_host": true, 00:29:24.700 "hosts": [] 00:29:24.700 }, 00:29:24.700 { 00:29:24.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.700 "subtype": "NVMe", 00:29:24.700 "listen_addresses": [ 00:29:24.700 { 00:29:24.700 "trtype": "TCP", 00:29:24.700 "adrfam": "IPv4", 00:29:24.700 "traddr": "10.0.0.2", 00:29:24.700 "trsvcid": "4420" 00:29:24.700 } 00:29:24.700 ], 00:29:24.700 "allow_any_host": true, 00:29:24.700 "hosts": [], 00:29:24.700 "serial_number": "SPDK00000000000001", 00:29:24.700 "model_number": "SPDK bdev Controller", 00:29:24.700 "max_namespaces": 2, 00:29:24.700 "min_cntlid": 1, 00:29:24.700 "max_cntlid": 65519, 00:29:24.700 "namespaces": [ 00:29:24.700 { 00:29:24.700 "nsid": 1, 00:29:24.700 "bdev_name": "Malloc0", 00:29:24.700 "name": "Malloc0", 00:29:24.700 "nguid": "3F7357E9B7824E1AA87F4BFA726C40C5", 00:29:24.700 "uuid": "3f7357e9-b782-4e1a-a87f-4bfa726c40c5" 00:29:24.700 }, 00:29:24.700 { 00:29:24.700 "nsid": 2, 00:29:24.700 "bdev_name": "Malloc1", 00:29:24.700 "name": "Malloc1", 00:29:24.700 "nguid": "D25D677AC0F94E91B9C0BB7E6E4FC2D0", 00:29:24.700 "uuid": "d25d677a-c0f9-4e91-b9c0-bb7e6e4fc2d0" 00:29:24.700 } 00:29:24.700 ] 00:29:24.700 } 00:29:24.700 ] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1268450 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.700 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.701 rmmod nvme_tcp 00:29:24.701 rmmod nvme_fabrics 00:29:24.701 rmmod nvme_keyring 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1268282 ']' 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1268282 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1268282 ']' 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1268282 00:29:24.701 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1268282 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1268282' 00:29:24.960 killing process with pid 1268282 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1268282 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1268282 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.960 12:19:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.496 12:19:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:27.496 00:29:27.496 real 0m9.638s 00:29:27.496 user 0m7.788s 00:29:27.496 sys 0m4.798s 00:29:27.496 12:19:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.496 12:19:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:27.496 ************************************ 00:29:27.496 END TEST nvmf_aer 00:29:27.496 ************************************ 00:29:27.496 12:19:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:27.496 12:19:17 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:27.496 12:19:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:27.496 12:19:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.496 12:19:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:27.496 ************************************ 00:29:27.496 START TEST nvmf_async_init 00:29:27.496 ************************************ 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:27.496 * Looking for test storage... 
00:29:27.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9c2e1facc8384e5ebaae37d4c32ec146 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:27.496 12:19:17 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.496 12:19:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.769 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.769 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.769 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.769 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.769 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:29:33.037 00:29:33.037 --- 10.0.0.2 ping statistics --- 00:29:33.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.037 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:29:33.037 00:29:33.037 --- 10.0.0.1 ping statistics --- 00:29:33.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.037 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1271950 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1271950 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1271950 ']' 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.037 12:19:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.037 [2024-07-15 12:19:22.982783] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
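The nvmfappstart step traced here launches the target inside the test namespace, roughly as sketched below (paths are the ones used by this workspace); waitforlisten then just polls until the process is up and its default RPC socket exists before any rpc_cmd calls are issued.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# wait for the target to listen on /var/tmp/spdk.sock before sending RPCs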
00:29:33.037 [2024-07-15 12:19:22.982827] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.037 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.301 [2024-07-15 12:19:23.053800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.301 [2024-07-15 12:19:23.093540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.301 [2024-07-15 12:19:23.093579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.301 [2024-07-15 12:19:23.093586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.301 [2024-07-15 12:19:23.093592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.301 [2024-07-15 12:19:23.093598] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.301 [2024-07-15 12:19:23.093632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 [2024-07-15 12:19:23.225826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 null0 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 12:19:23 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9c2e1facc8384e5ebaae37d4c32ec146 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 [2024-07-15 12:19:23.266035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.301 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.561 nvme0n1 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.561 [ 00:29:33.561 { 00:29:33.561 "name": "nvme0n1", 00:29:33.561 "aliases": [ 00:29:33.561 "9c2e1fac-c838-4e5e-baae-37d4c32ec146" 00:29:33.561 ], 00:29:33.561 "product_name": "NVMe disk", 00:29:33.561 "block_size": 512, 00:29:33.561 "num_blocks": 2097152, 00:29:33.561 "uuid": "9c2e1fac-c838-4e5e-baae-37d4c32ec146", 00:29:33.561 "assigned_rate_limits": { 00:29:33.561 "rw_ios_per_sec": 0, 00:29:33.561 "rw_mbytes_per_sec": 0, 00:29:33.561 "r_mbytes_per_sec": 0, 00:29:33.561 "w_mbytes_per_sec": 0 00:29:33.561 }, 00:29:33.561 "claimed": false, 00:29:33.561 "zoned": false, 00:29:33.561 "supported_io_types": { 00:29:33.561 "read": true, 00:29:33.561 "write": true, 00:29:33.561 "unmap": false, 00:29:33.561 "flush": true, 00:29:33.561 "reset": true, 00:29:33.561 "nvme_admin": true, 00:29:33.561 "nvme_io": true, 00:29:33.561 "nvme_io_md": false, 00:29:33.561 "write_zeroes": true, 00:29:33.561 "zcopy": false, 00:29:33.561 "get_zone_info": false, 00:29:33.561 "zone_management": false, 00:29:33.561 "zone_append": false, 00:29:33.561 "compare": true, 00:29:33.561 "compare_and_write": true, 00:29:33.561 "abort": true, 00:29:33.561 "seek_hole": false, 00:29:33.561 "seek_data": false, 00:29:33.561 "copy": true, 00:29:33.561 "nvme_iov_md": false 00:29:33.561 }, 00:29:33.561 "memory_domains": [ 00:29:33.561 { 00:29:33.561 "dma_device_id": "system", 00:29:33.561 "dma_device_type": 1 00:29:33.561 } 00:29:33.561 ], 00:29:33.561 "driver_specific": { 00:29:33.561 "nvme": [ 00:29:33.561 { 00:29:33.561 "trid": { 00:29:33.561 "trtype": "TCP", 00:29:33.561 "adrfam": "IPv4", 00:29:33.561 "traddr": "10.0.0.2", 
00:29:33.561 "trsvcid": "4420", 00:29:33.561 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:33.561 }, 00:29:33.561 "ctrlr_data": { 00:29:33.561 "cntlid": 1, 00:29:33.561 "vendor_id": "0x8086", 00:29:33.561 "model_number": "SPDK bdev Controller", 00:29:33.561 "serial_number": "00000000000000000000", 00:29:33.561 "firmware_revision": "24.09", 00:29:33.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.561 "oacs": { 00:29:33.561 "security": 0, 00:29:33.561 "format": 0, 00:29:33.561 "firmware": 0, 00:29:33.561 "ns_manage": 0 00:29:33.561 }, 00:29:33.561 "multi_ctrlr": true, 00:29:33.561 "ana_reporting": false 00:29:33.561 }, 00:29:33.561 "vs": { 00:29:33.561 "nvme_version": "1.3" 00:29:33.561 }, 00:29:33.561 "ns_data": { 00:29:33.561 "id": 1, 00:29:33.561 "can_share": true 00:29:33.561 } 00:29:33.561 } 00:29:33.561 ], 00:29:33.561 "mp_policy": "active_passive" 00:29:33.561 } 00:29:33.561 } 00:29:33.561 ] 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.561 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.561 [2024-07-15 12:19:23.522567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:33.561 [2024-07-15 12:19:23.522636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cece0 (9): Bad file descriptor 00:29:33.820 [2024-07-15 12:19:23.654319] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.820 [ 00:29:33.820 { 00:29:33.820 "name": "nvme0n1", 00:29:33.820 "aliases": [ 00:29:33.820 "9c2e1fac-c838-4e5e-baae-37d4c32ec146" 00:29:33.820 ], 00:29:33.820 "product_name": "NVMe disk", 00:29:33.820 "block_size": 512, 00:29:33.820 "num_blocks": 2097152, 00:29:33.820 "uuid": "9c2e1fac-c838-4e5e-baae-37d4c32ec146", 00:29:33.820 "assigned_rate_limits": { 00:29:33.820 "rw_ios_per_sec": 0, 00:29:33.820 "rw_mbytes_per_sec": 0, 00:29:33.820 "r_mbytes_per_sec": 0, 00:29:33.820 "w_mbytes_per_sec": 0 00:29:33.820 }, 00:29:33.820 "claimed": false, 00:29:33.820 "zoned": false, 00:29:33.820 "supported_io_types": { 00:29:33.820 "read": true, 00:29:33.820 "write": true, 00:29:33.820 "unmap": false, 00:29:33.820 "flush": true, 00:29:33.820 "reset": true, 00:29:33.820 "nvme_admin": true, 00:29:33.820 "nvme_io": true, 00:29:33.820 "nvme_io_md": false, 00:29:33.820 "write_zeroes": true, 00:29:33.820 "zcopy": false, 00:29:33.820 "get_zone_info": false, 00:29:33.820 "zone_management": false, 00:29:33.820 "zone_append": false, 00:29:33.820 "compare": true, 00:29:33.820 "compare_and_write": true, 00:29:33.820 "abort": true, 00:29:33.820 "seek_hole": false, 00:29:33.820 "seek_data": false, 00:29:33.820 "copy": true, 00:29:33.820 "nvme_iov_md": false 00:29:33.820 }, 00:29:33.820 "memory_domains": [ 00:29:33.820 { 00:29:33.820 "dma_device_id": "system", 00:29:33.820 "dma_device_type": 
1 00:29:33.820 } 00:29:33.820 ], 00:29:33.820 "driver_specific": { 00:29:33.820 "nvme": [ 00:29:33.820 { 00:29:33.820 "trid": { 00:29:33.820 "trtype": "TCP", 00:29:33.820 "adrfam": "IPv4", 00:29:33.820 "traddr": "10.0.0.2", 00:29:33.820 "trsvcid": "4420", 00:29:33.820 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:33.820 }, 00:29:33.820 "ctrlr_data": { 00:29:33.820 "cntlid": 2, 00:29:33.820 "vendor_id": "0x8086", 00:29:33.820 "model_number": "SPDK bdev Controller", 00:29:33.820 "serial_number": "00000000000000000000", 00:29:33.820 "firmware_revision": "24.09", 00:29:33.820 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.820 "oacs": { 00:29:33.820 "security": 0, 00:29:33.820 "format": 0, 00:29:33.820 "firmware": 0, 00:29:33.820 "ns_manage": 0 00:29:33.820 }, 00:29:33.820 "multi_ctrlr": true, 00:29:33.820 "ana_reporting": false 00:29:33.820 }, 00:29:33.820 "vs": { 00:29:33.820 "nvme_version": "1.3" 00:29:33.820 }, 00:29:33.820 "ns_data": { 00:29:33.820 "id": 1, 00:29:33.820 "can_share": true 00:29:33.820 } 00:29:33.820 } 00:29:33.820 ], 00:29:33.820 "mp_policy": "active_passive" 00:29:33.820 } 00:29:33.820 } 00:29:33.820 ] 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.820 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QWdIjMUV48 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QWdIjMUV48 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.821 [2024-07-15 12:19:23.719150] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:33.821 [2024-07-15 12:19:23.719273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QWdIjMUV48 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
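The TLS portion of nvmf_async_init reduces to the sketch below: the target restricts the subsystem to an explicit host, opens a second, PSK-protected listener on port 4421, and registers the host's pre-shared key; the initiator then reattaches over that listener with the same key file, as in the bdev_nvme_attach_controller call that follows. The key value and the /tmp/tmp.QWdIjMUV48 path are the ones generated in this run.

KEY=/tmp/tmp.QWdIjMUV48
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# initiator side, connecting through the TLS listener:
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"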
00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.821 [2024-07-15 12:19:23.727169] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QWdIjMUV48 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:33.821 [2024-07-15 12:19:23.739213] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:33.821 [2024-07-15 12:19:23.739249] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:33.821 nvme0n1 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.821 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:34.079 [ 00:29:34.079 { 00:29:34.079 "name": "nvme0n1", 00:29:34.079 "aliases": [ 00:29:34.079 "9c2e1fac-c838-4e5e-baae-37d4c32ec146" 00:29:34.079 ], 00:29:34.079 "product_name": "NVMe disk", 00:29:34.079 "block_size": 512, 00:29:34.079 "num_blocks": 2097152, 00:29:34.079 "uuid": "9c2e1fac-c838-4e5e-baae-37d4c32ec146", 00:29:34.079 "assigned_rate_limits": { 00:29:34.079 "rw_ios_per_sec": 0, 00:29:34.080 "rw_mbytes_per_sec": 0, 00:29:34.080 "r_mbytes_per_sec": 0, 00:29:34.080 "w_mbytes_per_sec": 0 00:29:34.080 }, 00:29:34.080 "claimed": false, 00:29:34.080 "zoned": false, 00:29:34.080 "supported_io_types": { 00:29:34.080 "read": true, 00:29:34.080 "write": true, 00:29:34.080 "unmap": false, 00:29:34.080 "flush": true, 00:29:34.080 "reset": true, 00:29:34.080 "nvme_admin": true, 00:29:34.080 "nvme_io": true, 00:29:34.080 "nvme_io_md": false, 00:29:34.080 "write_zeroes": true, 00:29:34.080 "zcopy": false, 00:29:34.080 "get_zone_info": false, 00:29:34.080 "zone_management": false, 00:29:34.080 "zone_append": false, 00:29:34.080 "compare": true, 00:29:34.080 "compare_and_write": true, 00:29:34.080 "abort": true, 00:29:34.080 "seek_hole": false, 00:29:34.080 "seek_data": false, 00:29:34.080 "copy": true, 00:29:34.080 "nvme_iov_md": false 00:29:34.080 }, 00:29:34.080 "memory_domains": [ 00:29:34.080 { 00:29:34.080 "dma_device_id": "system", 00:29:34.080 "dma_device_type": 1 00:29:34.080 } 00:29:34.080 ], 00:29:34.080 "driver_specific": { 00:29:34.080 "nvme": [ 00:29:34.080 { 00:29:34.080 "trid": { 00:29:34.080 "trtype": "TCP", 00:29:34.080 "adrfam": "IPv4", 00:29:34.080 "traddr": "10.0.0.2", 00:29:34.080 "trsvcid": "4421", 00:29:34.080 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:34.080 }, 00:29:34.080 "ctrlr_data": { 00:29:34.080 "cntlid": 3, 00:29:34.080 "vendor_id": "0x8086", 00:29:34.080 "model_number": "SPDK bdev Controller", 00:29:34.080 "serial_number": "00000000000000000000", 00:29:34.080 "firmware_revision": "24.09", 00:29:34.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:29:34.080 "oacs": { 00:29:34.080 "security": 0, 00:29:34.080 "format": 0, 00:29:34.080 "firmware": 0, 00:29:34.080 "ns_manage": 0 00:29:34.080 }, 00:29:34.080 "multi_ctrlr": true, 00:29:34.080 "ana_reporting": false 00:29:34.080 }, 00:29:34.080 "vs": { 00:29:34.080 "nvme_version": "1.3" 00:29:34.080 }, 00:29:34.080 "ns_data": { 00:29:34.080 "id": 1, 00:29:34.080 "can_share": true 00:29:34.080 } 00:29:34.080 } 00:29:34.080 ], 00:29:34.080 "mp_policy": "active_passive" 00:29:34.080 } 00:29:34.080 } 00:29:34.080 ] 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.QWdIjMUV48 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.080 rmmod nvme_tcp 00:29:34.080 rmmod nvme_fabrics 00:29:34.080 rmmod nvme_keyring 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1271950 ']' 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1271950 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1271950 ']' 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1271950 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1271950 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1271950' 00:29:34.080 killing process with pid 1271950 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1271950 00:29:34.080 [2024-07-15 12:19:23.963520] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:29:34.080 [2024-07-15 12:19:23.963541] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:34.080 12:19:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1271950 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.339 12:19:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.243 12:19:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:36.243 00:29:36.243 real 0m9.125s 00:29:36.243 user 0m2.851s 00:29:36.243 sys 0m4.674s 00:29:36.243 12:19:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.243 12:19:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.243 ************************************ 00:29:36.243 END TEST nvmf_async_init 00:29:36.243 ************************************ 00:29:36.243 12:19:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:36.243 12:19:26 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:36.243 12:19:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:36.243 12:19:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.243 12:19:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.501 ************************************ 00:29:36.501 START TEST dma 00:29:36.501 ************************************ 00:29:36.501 12:19:26 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:36.501 * Looking for test storage... 
00:29:36.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:36.501 12:19:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.501 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.502 12:19:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.502 12:19:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.502 12:19:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.502 12:19:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.502 12:19:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.502 12:19:26 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.502 12:19:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:36.502 12:19:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.502 12:19:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.502 12:19:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:36.502 12:19:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:36.502 00:29:36.502 real 0m0.120s 00:29:36.502 user 0m0.051s 00:29:36.502 sys 0m0.077s 00:29:36.502 12:19:26 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:36.502 12:19:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:36.502 ************************************ 00:29:36.502 END TEST dma 00:29:36.502 ************************************ 00:29:36.502 12:19:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:36.502 12:19:26 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:36.502 12:19:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:36.502 12:19:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.502 12:19:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.502 ************************************ 00:29:36.502 START TEST nvmf_identify 00:29:36.502 ************************************ 00:29:36.502 12:19:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:36.761 * Looking for test storage... 
00:29:36.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:36.762 12:19:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:42.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:42.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:42.069 Found net devices under 0000:86:00.0: cvl_0_0 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:42.069 Found net devices under 0000:86:00.1: cvl_0_1 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.069 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:42.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:29:42.328 00:29:42.328 --- 10.0.0.2 ping statistics --- 00:29:42.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.328 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:29:42.328 00:29:42.328 --- 10.0.0.1 ping statistics --- 00:29:42.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.328 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1275643 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1275643 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1275643 ']' 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:42.328 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.587 [2024-07-15 12:19:32.343021] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:29:42.587 [2024-07-15 12:19:32.343066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.587 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.587 [2024-07-15 12:19:32.415358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.587 [2024-07-15 12:19:32.458712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
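A hedged sketch of the network-namespace topology that nvmftestinit builds above for this phy (physical NIC) run: the target-side port is moved into its own namespace and nvmf_tgt is started inside it, while the initiator keeps the second port in the root namespace. Interface names, addresses, and the binary path are the ones from this job and will differ on other hosts.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # sanity-check both directions
  modprobe nvme-tcp
  # Start the target inside the namespace (core mask 0xF, all trace groups), as the test does
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &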
00:29:42.587 [2024-07-15 12:19:32.458748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.587 [2024-07-15 12:19:32.458755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.587 [2024-07-15 12:19:32.458761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.587 [2024-07-15 12:19:32.458767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.587 [2024-07-15 12:19:32.458818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.587 [2024-07-15 12:19:32.458930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.587 [2024-07-15 12:19:32.459038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.587 [2024-07-15 12:19:32.459039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.587 [2024-07-15 12:19:32.561057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:42.587 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 Malloc0 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 [2024-07-15 12:19:32.649128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.848 [ 00:29:42.848 { 00:29:42.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.848 "subtype": "Discovery", 00:29:42.848 "listen_addresses": [ 00:29:42.848 { 00:29:42.848 "trtype": "TCP", 00:29:42.848 "adrfam": "IPv4", 00:29:42.848 "traddr": "10.0.0.2", 00:29:42.848 "trsvcid": "4420" 00:29:42.848 } 00:29:42.848 ], 00:29:42.848 "allow_any_host": true, 00:29:42.848 "hosts": [] 00:29:42.848 }, 00:29:42.848 { 00:29:42.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.848 "subtype": "NVMe", 00:29:42.848 "listen_addresses": [ 00:29:42.848 { 00:29:42.848 "trtype": "TCP", 00:29:42.848 "adrfam": "IPv4", 00:29:42.848 "traddr": "10.0.0.2", 00:29:42.848 "trsvcid": "4420" 00:29:42.848 } 00:29:42.848 ], 00:29:42.848 "allow_any_host": true, 00:29:42.848 "hosts": [], 00:29:42.848 "serial_number": "SPDK00000000000001", 00:29:42.848 "model_number": "SPDK bdev Controller", 00:29:42.848 "max_namespaces": 32, 00:29:42.848 "min_cntlid": 1, 00:29:42.848 "max_cntlid": 65519, 00:29:42.848 "namespaces": [ 00:29:42.848 { 00:29:42.848 "nsid": 1, 00:29:42.848 "bdev_name": "Malloc0", 00:29:42.848 "name": "Malloc0", 00:29:42.848 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:42.848 "eui64": "ABCDEF0123456789", 00:29:42.848 "uuid": "2cdb94fc-16d2-4122-90b6-98ea91e8f3ac" 00:29:42.848 } 00:29:42.848 ] 00:29:42.848 } 00:29:42.848 ] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:42.848 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:42.848 [2024-07-15 12:19:32.699275] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
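For reference, a hedged recap of the target-side provisioning that the identify run starting here will enumerate, taken from the rpc_cmd calls traced above and written against scripts/rpc.py; the sizes, NQN, NGUID/EUI64, and listen address are the values from this run.

  rpc.py nvmf_create_transport -t tcp -o -u 8192                           # TCP transport, 8 KiB IO unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service on the same port
  rpc.py nvmf_get_subsystems                                               # should report both subsystems, as above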
00:29:42.848 [2024-07-15 12:19:32.699308] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275667 ] 00:29:42.848 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.848 [2024-07-15 12:19:32.727775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:42.848 [2024-07-15 12:19:32.727826] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:42.848 [2024-07-15 12:19:32.727831] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:42.848 [2024-07-15 12:19:32.727841] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:42.848 [2024-07-15 12:19:32.727847] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:42.848 [2024-07-15 12:19:32.728090] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:42.848 [2024-07-15 12:19:32.728117] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e58af0 0 00:29:42.848 [2024-07-15 12:19:32.741233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:42.848 [2024-07-15 12:19:32.741245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:42.848 [2024-07-15 12:19:32.741249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:42.848 [2024-07-15 12:19:32.741252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:42.848 [2024-07-15 12:19:32.741288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.848 [2024-07-15 12:19:32.741293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.848 [2024-07-15 12:19:32.741297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.848 [2024-07-15 12:19:32.741309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:42.848 [2024-07-15 12:19:32.741325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.848 [2024-07-15 12:19:32.748235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.848 [2024-07-15 12:19:32.748243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.848 [2024-07-15 12:19:32.748246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.848 [2024-07-15 12:19:32.748250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.848 [2024-07-15 12:19:32.748259] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:42.848 [2024-07-15 12:19:32.748265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:42.848 [2024-07-15 12:19:32.748270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:42.848 [2024-07-15 12:19:32.748282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.848 [2024-07-15 12:19:32.748286] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.848 [2024-07-15 12:19:32.748289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.748297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.748310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.748463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 [2024-07-15 12:19:32.748469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.748472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.748481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:42.849 [2024-07-15 12:19:32.748487] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:42.849 [2024-07-15 12:19:32.748493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.748509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.748519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.748590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 [2024-07-15 12:19:32.748596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.748599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.748607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:42.849 [2024-07-15 12:19:32.748614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.748620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.748632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.748642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.748706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 
[2024-07-15 12:19:32.748712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.748715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.748723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.748731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.748744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.748752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.748821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 [2024-07-15 12:19:32.748827] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.748830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.748837] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:42.849 [2024-07-15 12:19:32.748842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.748849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.748953] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:42.849 [2024-07-15 12:19:32.748958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.748967] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.748974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.748979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.748989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.749053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 [2024-07-15 12:19:32.749059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.749062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.749065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.749069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:42.849 [2024-07-15 12:19:32.749077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.749081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.749084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.749090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.749099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.749171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.849 [2024-07-15 12:19:32.749177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.849 [2024-07-15 12:19:32.749180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.749183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.849 [2024-07-15 12:19:32.749187] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:42.849 [2024-07-15 12:19:32.749191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:42.849 [2024-07-15 12:19:32.749197] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:42.849 [2024-07-15 12:19:32.749205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:42.849 [2024-07-15 12:19:32.749212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.849 [2024-07-15 12:19:32.749215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.849 [2024-07-15 12:19:32.749221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.849 [2024-07-15 12:19:32.749236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.849 [2024-07-15 12:19:32.749334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.850 [2024-07-15 12:19:32.749340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.850 [2024-07-15 12:19:32.749343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.749346] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e58af0): datao=0, datal=4096, cccid=0 00:29:42.850 [2024-07-15 12:19:32.749350] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec5340) on tqpair(0x1e58af0): expected_datao=0, payload_size=4096 00:29:42.850 [2024-07-15 12:19:32.749354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.749376] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.749381] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.850 [2024-07-15 12:19:32.791244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.850 [2024-07-15 12:19:32.791247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.850 [2024-07-15 12:19:32.791259] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:42.850 [2024-07-15 12:19:32.791266] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:42.850 [2024-07-15 12:19:32.791271] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:42.850 [2024-07-15 12:19:32.791275] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:42.850 [2024-07-15 12:19:32.791280] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:42.850 [2024-07-15 12:19:32.791284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:42.850 [2024-07-15 12:19:32.791292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:42.850 [2024-07-15 12:19:32.791299] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:42.850 [2024-07-15 12:19:32.791326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.850 [2024-07-15 12:19:32.791475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.850 [2024-07-15 12:19:32.791481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.850 [2024-07-15 12:19:32.791484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:42.850 [2024-07-15 12:19:32.791495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.850 [2024-07-15 12:19:32.791512] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791516] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.850 [2024-07-15 12:19:32.791529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.850 [2024-07-15 12:19:32.791548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.850 [2024-07-15 12:19:32.791564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:42.850 [2024-07-15 12:19:32.791575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:42.850 [2024-07-15 12:19:32.791581] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791584] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.850 [2024-07-15 12:19:32.791601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5340, cid 0, qid 0 00:29:42.850 [2024-07-15 12:19:32.791605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec54c0, cid 1, qid 0 00:29:42.850 [2024-07-15 12:19:32.791610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5640, cid 2, qid 0 00:29:42.850 [2024-07-15 12:19:32.791614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:42.850 [2024-07-15 12:19:32.791618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5940, cid 4, qid 0 00:29:42.850 [2024-07-15 12:19:32.791724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.850 [2024-07-15 12:19:32.791730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.850 [2024-07-15 12:19:32.791733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791737] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5940) on tqpair=0x1e58af0 00:29:42.850 [2024-07-15 12:19:32.791741] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:42.850 [2024-07-15 12:19:32.791746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:42.850 [2024-07-15 12:19:32.791755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e58af0) 00:29:42.850 [2024-07-15 12:19:32.791765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.850 [2024-07-15 12:19:32.791774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5940, cid 4, qid 0 00:29:42.850 [2024-07-15 12:19:32.791850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.850 [2024-07-15 12:19:32.791855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.850 [2024-07-15 12:19:32.791859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791862] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e58af0): datao=0, datal=4096, cccid=4 00:29:42.850 [2024-07-15 12:19:32.791866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec5940) on tqpair(0x1e58af0): expected_datao=0, payload_size=4096 00:29:42.850 [2024-07-15 12:19:32.791869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791910] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.850 [2024-07-15 12:19:32.791957] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.850 [2024-07-15 12:19:32.791965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.851 [2024-07-15 12:19:32.791968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.791971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5940) on tqpair=0x1e58af0 00:29:42.851 [2024-07-15 12:19:32.791981] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:42.851 [2024-07-15 12:19:32.792004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e58af0) 00:29:42.851 [2024-07-15 12:19:32.792014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.851 [2024-07-15 12:19:32.792020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e58af0) 00:29:42.851 [2024-07-15 12:19:32.792032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.851 [2024-07-15 12:19:32.792045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1ec5940, cid 4, qid 0 00:29:42.851 [2024-07-15 12:19:32.792050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5ac0, cid 5, qid 0 00:29:42.851 [2024-07-15 12:19:32.792152] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.851 [2024-07-15 12:19:32.792158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.851 [2024-07-15 12:19:32.792161] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792164] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e58af0): datao=0, datal=1024, cccid=4 00:29:42.851 [2024-07-15 12:19:32.792168] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec5940) on tqpair(0x1e58af0): expected_datao=0, payload_size=1024 00:29:42.851 [2024-07-15 12:19:32.792172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792181] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.851 [2024-07-15 12:19:32.792190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.851 [2024-07-15 12:19:32.792193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.792197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5ac0) on tqpair=0x1e58af0 00:29:42.851 [2024-07-15 12:19:32.836231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.851 [2024-07-15 12:19:32.836240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.851 [2024-07-15 12:19:32.836244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.836247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5940) on tqpair=0x1e58af0 00:29:42.851 [2024-07-15 12:19:32.836257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.836260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e58af0) 00:29:42.851 [2024-07-15 12:19:32.836267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.851 [2024-07-15 12:19:32.836282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5940, cid 4, qid 0 00:29:42.851 [2024-07-15 12:19:32.836443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.851 [2024-07-15 12:19:32.836449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.851 [2024-07-15 12:19:32.836452] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.836458] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e58af0): datao=0, datal=3072, cccid=4 00:29:42.851 [2024-07-15 12:19:32.836462] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec5940) on tqpair(0x1e58af0): expected_datao=0, payload_size=3072 00:29:42.851 [2024-07-15 12:19:32.836465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.836487] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.851 [2024-07-15 12:19:32.836491] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.116 [2024-07-15 12:19:32.877351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.116 [2024-07-15 12:19:32.877354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5940) on tqpair=0x1e58af0 00:29:43.116 [2024-07-15 12:19:32.877366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e58af0) 00:29:43.116 [2024-07-15 12:19:32.877376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.116 [2024-07-15 12:19:32.877391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec5940, cid 4, qid 0 00:29:43.116 [2024-07-15 12:19:32.877462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.116 [2024-07-15 12:19:32.877468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.116 [2024-07-15 12:19:32.877471] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877474] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e58af0): datao=0, datal=8, cccid=4 00:29:43.116 [2024-07-15 12:19:32.877478] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ec5940) on tqpair(0x1e58af0): expected_datao=0, payload_size=8 00:29:43.116 [2024-07-15 12:19:32.877482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877488] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.877491] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.922236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.116 [2024-07-15 12:19:32.922245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.116 [2024-07-15 12:19:32.922249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.116 [2024-07-15 12:19:32.922252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5940) on tqpair=0x1e58af0 00:29:43.116 ===================================================== 00:29:43.116 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:43.116 ===================================================== 00:29:43.116 Controller Capabilities/Features 00:29:43.116 ================================ 00:29:43.116 Vendor ID: 0000 00:29:43.116 Subsystem Vendor ID: 0000 00:29:43.116 Serial Number: .................... 00:29:43.116 Model Number: ........................................ 
00:29:43.116 Firmware Version: 24.09
00:29:43.116 Recommended Arb Burst: 0
00:29:43.116 IEEE OUI Identifier: 00 00 00
00:29:43.116 Multi-path I/O
00:29:43.116 May have multiple subsystem ports: No
00:29:43.116 May have multiple controllers: No
00:29:43.116 Associated with SR-IOV VF: No
00:29:43.116 Max Data Transfer Size: 131072
00:29:43.116 Max Number of Namespaces: 0
00:29:43.116 Max Number of I/O Queues: 1024
00:29:43.116 NVMe Specification Version (VS): 1.3
00:29:43.116 NVMe Specification Version (Identify): 1.3
00:29:43.116 Maximum Queue Entries: 128
00:29:43.116 Contiguous Queues Required: Yes
00:29:43.116 Arbitration Mechanisms Supported
00:29:43.116 Weighted Round Robin: Not Supported
00:29:43.116 Vendor Specific: Not Supported
00:29:43.116 Reset Timeout: 15000 ms
00:29:43.116 Doorbell Stride: 4 bytes
00:29:43.116 NVM Subsystem Reset: Not Supported
00:29:43.116 Command Sets Supported
00:29:43.116 NVM Command Set: Supported
00:29:43.116 Boot Partition: Not Supported
00:29:43.116 Memory Page Size Minimum: 4096 bytes
00:29:43.116 Memory Page Size Maximum: 4096 bytes
00:29:43.116 Persistent Memory Region: Not Supported
00:29:43.116 Optional Asynchronous Events Supported
00:29:43.116 Namespace Attribute Notices: Not Supported
00:29:43.116 Firmware Activation Notices: Not Supported
00:29:43.116 ANA Change Notices: Not Supported
00:29:43.116 PLE Aggregate Log Change Notices: Not Supported
00:29:43.116 LBA Status Info Alert Notices: Not Supported
00:29:43.116 EGE Aggregate Log Change Notices: Not Supported
00:29:43.116 Normal NVM Subsystem Shutdown event: Not Supported
00:29:43.116 Zone Descriptor Change Notices: Not Supported
00:29:43.116 Discovery Log Change Notices: Supported
00:29:43.116 Controller Attributes
00:29:43.116 128-bit Host Identifier: Not Supported
00:29:43.116 Non-Operational Permissive Mode: Not Supported
00:29:43.116 NVM Sets: Not Supported
00:29:43.116 Read Recovery Levels: Not Supported
00:29:43.116 Endurance Groups: Not Supported
00:29:43.116 Predictable Latency Mode: Not Supported
00:29:43.116 Traffic Based Keep ALive: Not Supported
00:29:43.116 Namespace Granularity: Not Supported
00:29:43.116 SQ Associations: Not Supported
00:29:43.116 UUID List: Not Supported
00:29:43.116 Multi-Domain Subsystem: Not Supported
00:29:43.116 Fixed Capacity Management: Not Supported
00:29:43.116 Variable Capacity Management: Not Supported
00:29:43.116 Delete Endurance Group: Not Supported
00:29:43.116 Delete NVM Set: Not Supported
00:29:43.116 Extended LBA Formats Supported: Not Supported
00:29:43.139 Flexible Data Placement Supported: Not Supported
00:29:43.139 
00:29:43.139 Controller Memory Buffer Support
00:29:43.139 ================================
00:29:43.139 Supported: No
00:29:43.139 
00:29:43.139 Persistent Memory Region Support
00:29:43.139 ================================
00:29:43.139 Supported: No
00:29:43.139 
00:29:43.139 Admin Command Set Attributes
00:29:43.139 ============================
00:29:43.139 Security Send/Receive: Not Supported
00:29:43.139 Format NVM: Not Supported
00:29:43.139 Firmware Activate/Download: Not Supported
00:29:43.139 Namespace Management: Not Supported
00:29:43.139 Device Self-Test: Not Supported
00:29:43.139 Directives: Not Supported
00:29:43.139 NVMe-MI: Not Supported
00:29:43.139 Virtualization Management: Not Supported
00:29:43.139 Doorbell Buffer Config: Not Supported
00:29:43.139 Get LBA Status Capability: Not Supported
00:29:43.139 Command & Feature Lockdown Capability: Not Supported
00:29:43.139 Abort Command Limit: 1
00:29:43.139 Async Event Request Limit: 4
00:29:43.139 Number of Firmware Slots: N/A
00:29:43.139 Firmware Slot 1 Read-Only: N/A
00:29:43.139 Firmware Activation Without Reset: N/A
00:29:43.139 Multiple Update Detection Support: N/A
00:29:43.139 Firmware Update Granularity: No Information Provided
00:29:43.139 Per-Namespace SMART Log: No
00:29:43.139 Asymmetric Namespace Access Log Page: Not Supported
00:29:43.139 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:43.139 Command Effects Log Page: Not Supported
00:29:43.139 Get Log Page Extended Data: Supported
00:29:43.139 Telemetry Log Pages: Not Supported
00:29:43.139 Persistent Event Log Pages: Not Supported
00:29:43.139 Supported Log Pages Log Page: May Support
00:29:43.139 Commands Supported & Effects Log Page: Not Supported
00:29:43.139 Feature Identifiers & Effects Log Page:May Support
00:29:43.139 NVMe-MI Commands & Effects Log Page: May Support
00:29:43.139 Data Area 4 for Telemetry Log: Not Supported
00:29:43.139 Error Log Page Entries Supported: 128
00:29:43.139 Keep Alive: Not Supported
00:29:43.139 
00:29:43.139 NVM Command Set Attributes
00:29:43.139 ==========================
00:29:43.139 Submission Queue Entry Size
00:29:43.139 Max: 1
00:29:43.139 Min: 1
00:29:43.139 Completion Queue Entry Size
00:29:43.139 Max: 1
00:29:43.139 Min: 1
00:29:43.139 Number of Namespaces: 0
00:29:43.139 Compare Command: Not Supported
00:29:43.139 Write Uncorrectable Command: Not Supported
00:29:43.139 Dataset Management Command: Not Supported
00:29:43.139 Write Zeroes Command: Not Supported
00:29:43.139 Set Features Save Field: Not Supported
00:29:43.139 Reservations: Not Supported
00:29:43.139 Timestamp: Not Supported
00:29:43.139 Copy: Not Supported
00:29:43.139 Volatile Write Cache: Not Present
00:29:43.139 Atomic Write Unit (Normal): 1
00:29:43.139 Atomic Write Unit (PFail): 1
00:29:43.139 Atomic Compare & Write Unit: 1
00:29:43.139 Fused Compare & Write: Supported
00:29:43.139 Scatter-Gather List
00:29:43.139 SGL Command Set: Supported
00:29:43.139 SGL Keyed: Supported
00:29:43.139 SGL Bit Bucket Descriptor: Not Supported
00:29:43.139 SGL Metadata Pointer: Not Supported
00:29:43.139 Oversized SGL: Not Supported
00:29:43.139 SGL Metadata Address: Not Supported
00:29:43.139 SGL Offset: Supported
00:29:43.139 Transport SGL Data Block: Not Supported
00:29:43.139 Replay Protected Memory Block: Not Supported
00:29:43.139 
00:29:43.139 Firmware Slot Information
00:29:43.139 =========================
00:29:43.139 Active slot: 0
00:29:43.139 
00:29:43.139 
00:29:43.139 Error Log
00:29:43.139 =========
00:29:43.139 
00:29:43.139 Active Namespaces
00:29:43.139 =================
00:29:43.139 Discovery Log Page
00:29:43.139 ==================
00:29:43.139 Generation Counter: 2
00:29:43.139 Number of Records: 2
00:29:43.139 Record Format: 0
00:29:43.139 
00:29:43.139 Discovery Log Entry 0
00:29:43.139 ----------------------
00:29:43.139 Transport Type: 3 (TCP)
00:29:43.139 Address Family: 1 (IPv4)
00:29:43.139 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:43.139 Entry Flags:
00:29:43.139 Duplicate Returned Information: 1
00:29:43.139 Explicit Persistent Connection Support for Discovery: 1
00:29:43.139 Transport Requirements:
00:29:43.139 Secure Channel: Not Required
00:29:43.139 Port ID: 0 (0x0000)
00:29:43.139 Controller ID: 65535 (0xffff)
00:29:43.139 Admin Max SQ Size: 128
00:29:43.139 Transport Service Identifier: 4420
00:29:43.139 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:43.139 Transport Address: 10.0.0.2
00:29:43.139 
Discovery Log Entry 1 00:29:43.139 ---------------------- 00:29:43.139 Transport Type: 3 (TCP) 00:29:43.139 Address Family: 1 (IPv4) 00:29:43.139 Subsystem Type: 2 (NVM Subsystem) 00:29:43.139 Entry Flags: 00:29:43.139 Duplicate Returned Information: 0 00:29:43.139 Explicit Persistent Connection Support for Discovery: 0 00:29:43.139 Transport Requirements: 00:29:43.139 Secure Channel: Not Required 00:29:43.139 Port ID: 0 (0x0000) 00:29:43.139 Controller ID: 65535 (0xffff) 00:29:43.139 Admin Max SQ Size: 128 00:29:43.139 Transport Service Identifier: 4420 00:29:43.139 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:43.139 Transport Address: 10.0.0.2 [2024-07-15 12:19:32.922332] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:43.139 [2024-07-15 12:19:32.922343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5340) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.139 [2024-07-15 12:19:32.922353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec54c0) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.139 [2024-07-15 12:19:32.922362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec5640) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.139 [2024-07-15 12:19:32.922370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.139 [2024-07-15 12:19:32.922385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.139 [2024-07-15 12:19:32.922399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.139 [2024-07-15 12:19:32.922413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.139 [2024-07-15 12:19:32.922482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.139 [2024-07-15 12:19:32.922488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.139 [2024-07-15 12:19:32.922491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922501] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922505] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922508] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.139 [2024-07-15 
12:19:32.922514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.139 [2024-07-15 12:19:32.922526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.139 [2024-07-15 12:19:32.922608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.139 [2024-07-15 12:19:32.922614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.139 [2024-07-15 12:19:32.922617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.139 [2024-07-15 12:19:32.922624] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:43.139 [2024-07-15 12:19:32.922629] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:43.139 [2024-07-15 12:19:32.922637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.139 [2024-07-15 12:19:32.922644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.139 [2024-07-15 12:19:32.922649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.922658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.922726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.922732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.922735] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.922747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.922760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.922769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.922839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.922845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.922848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.922861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922868] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.922874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.922883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.922951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.922957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.922960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.922971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.922978] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.922984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.922993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923092] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923308] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923514] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 
[2024-07-15 12:19:32.923615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923658] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923736] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923748] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923771] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.923858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.923958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.923963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:43.140 [2024-07-15 12:19:32.923966] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.923978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.923985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.923990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.923999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.924069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.924074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.924077] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.924080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.140 [2024-07-15 12:19:32.924089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.924093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.924096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.140 [2024-07-15 12:19:32.924101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.140 [2024-07-15 12:19:32.924110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.140 [2024-07-15 12:19:32.924177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.140 [2024-07-15 12:19:32.924183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.140 [2024-07-15 12:19:32.924186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.140 [2024-07-15 12:19:32.924189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924659] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924678] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924800] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.924885] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924888] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.924897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.924904] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.924909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.924918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.924992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.924999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925003] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925006] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925022] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 
[2024-07-15 12:19:32.925028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925106] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925393] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925478] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 [2024-07-15 12:19:32.925706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.141 [2024-07-15 12:19:32.925723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.141 [2024-07-15 12:19:32.925732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.141 [2024-07-15 12:19:32.925739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.141 [2024-07-15 12:19:32.925748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.141 [2024-07-15 12:19:32.925818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.141 
[2024-07-15 12:19:32.925824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.141 [2024-07-15 12:19:32.925827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.925838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.142 [2024-07-15 12:19:32.925851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:32.925860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.142 [2024-07-15 12:19:32.925934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:32.925939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:32.925942] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.925954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.925961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.142 [2024-07-15 12:19:32.925966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:32.925975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.142 [2024-07-15 12:19:32.926043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:32.926048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:32.926051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.926055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.926063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.926067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.926070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.142 [2024-07-15 12:19:32.926076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:32.926085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.142 [2024-07-15 12:19:32.926151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:32.926157] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:32.926160] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:43.142 [2024-07-15 12:19:32.926164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.926172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.926175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.926179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.142 [2024-07-15 12:19:32.926184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:32.926193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.142 [2024-07-15 12:19:32.930233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:32.930241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:32.930244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.930247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.930257] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.930260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.930263] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e58af0) 00:29:43.142 [2024-07-15 12:19:32.930269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:32.930281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ec57c0, cid 3, qid 0 00:29:43.142 [2024-07-15 12:19:32.930358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:32.930364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:32.930367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:32.930370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ec57c0) on tqpair=0x1e58af0 00:29:43.142 [2024-07-15 12:19:32.930376] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:43.142 00:29:43.142 12:19:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:43.142 [2024-07-15 12:19:32.967410] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:29:43.142 [2024-07-15 12:19:32.967452] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275675 ] 00:29:43.142 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.142 [2024-07-15 12:19:32.997473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:43.142 [2024-07-15 12:19:32.997517] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:43.142 [2024-07-15 12:19:32.997521] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:43.142 [2024-07-15 12:19:32.997532] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:43.142 [2024-07-15 12:19:32.997538] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:43.142 [2024-07-15 12:19:32.997856] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:43.142 [2024-07-15 12:19:32.997879] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f6eaf0 0 00:29:43.142 [2024-07-15 12:19:33.012235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:43.142 [2024-07-15 12:19:33.012247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:43.142 [2024-07-15 12:19:33.012251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:43.142 [2024-07-15 12:19:33.012254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:43.142 [2024-07-15 12:19:33.012281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.012287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.012290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.142 [2024-07-15 12:19:33.012300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:43.142 [2024-07-15 12:19:33.012316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.142 [2024-07-15 12:19:33.020236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:33.020245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:33.020248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.142 [2024-07-15 12:19:33.020259] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:43.142 [2024-07-15 12:19:33.020265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:43.142 [2024-07-15 12:19:33.020269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:43.142 [2024-07-15 12:19:33.020280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:43.142 [2024-07-15 12:19:33.020287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.142 [2024-07-15 12:19:33.020293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:33.020305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.142 [2024-07-15 12:19:33.020398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:33.020404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:33.020407] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.142 [2024-07-15 12:19:33.020415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:43.142 [2024-07-15 12:19:33.020421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:43.142 [2024-07-15 12:19:33.020428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.142 [2024-07-15 12:19:33.020440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.142 [2024-07-15 12:19:33.020450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.142 [2024-07-15 12:19:33.020519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.142 [2024-07-15 12:19:33.020527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.142 [2024-07-15 12:19:33.020531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.142 [2024-07-15 12:19:33.020538] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:43.142 [2024-07-15 12:19:33.020545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:43.142 [2024-07-15 12:19:33.020551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.142 [2024-07-15 12:19:33.020555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020558] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.020564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.020574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.020644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.020649] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:43.143 [2024-07-15 12:19:33.020652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.020660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:43.143 [2024-07-15 12:19:33.020668] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.020680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.020690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.020755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.020760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.020763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.020770] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:43.143 [2024-07-15 12:19:33.020775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:43.143 [2024-07-15 12:19:33.020781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:43.143 [2024-07-15 12:19:33.020886] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:43.143 [2024-07-15 12:19:33.020889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:43.143 [2024-07-15 12:19:33.020895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020899] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.020902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.020907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.020919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.020988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.020994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.020996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on 
tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.021004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:43.143 [2024-07-15 12:19:33.021012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.021024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.021033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.021107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.021113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.021116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.021123] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:43.143 [2024-07-15 12:19:33.021127] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.021133] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:43.143 [2024-07-15 12:19:33.021141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.021148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.021157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.021167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.021269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.143 [2024-07-15 12:19:33.021275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.143 [2024-07-15 12:19:33.021278] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=4096, cccid=0 00:29:43.143 [2024-07-15 12:19:33.021285] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdb340) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=4096 00:29:43.143 [2024-07-15 12:19:33.021289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021309] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.021313] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.065245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.065248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.065262] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:43.143 [2024-07-15 12:19:33.065269] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:43.143 [2024-07-15 12:19:33.065273] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:43.143 [2024-07-15 12:19:33.065277] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:43.143 [2024-07-15 12:19:33.065280] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:43.143 [2024-07-15 12:19:33.065285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.143 [2024-07-15 12:19:33.065326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.065408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.065414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.065417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.065426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.143 [2024-07-15 12:19:33.065444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065447] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065450] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.143 [2024-07-15 12:19:33.065460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065464] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065467] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.143 [2024-07-15 12:19:33.065477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.143 [2024-07-15 12:19:33.065492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065503] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.143 [2024-07-15 12:19:33.065518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-07-15 12:19:33.065529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb340, cid 0, qid 0 00:29:43.143 [2024-07-15 12:19:33.065534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb4c0, cid 1, qid 0 00:29:43.143 [2024-07-15 12:19:33.065538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb640, cid 2, qid 0 00:29:43.143 [2024-07-15 12:19:33.065542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.143 [2024-07-15 12:19:33.065547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.143 [2024-07-15 12:19:33.065652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.143 [2024-07-15 12:19:33.065658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.143 [2024-07-15 12:19:33.065661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.143 [2024-07-15 12:19:33.065668] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:43.143 [2024-07-15 12:19:33.065673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065680] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:43.143 [2024-07-15 12:19:33.065691] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.143 [2024-07-15 12:19:33.065698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.065704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.144 [2024-07-15 12:19:33.065713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.144 [2024-07-15 12:19:33.065783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.065789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.065792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.065795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.065846] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.065854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.065861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.065864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.065870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.065882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.144 [2024-07-15 12:19:33.065961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.144 [2024-07-15 12:19:33.065967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.144 [2024-07-15 12:19:33.065970] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.065973] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=4096, cccid=4 00:29:43.144 [2024-07-15 12:19:33.065977] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdb940) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=4096 00:29:43.144 [2024-07-15 12:19:33.065981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.065987] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.065990] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:29:43.144 [2024-07-15 12:19:33.066034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066049] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:43.144 [2024-07-15 12:19:33.066062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.144 [2024-07-15 12:19:33.066185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.144 [2024-07-15 12:19:33.066191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.144 [2024-07-15 12:19:33.066194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=4096, cccid=4 00:29:43.144 [2024-07-15 12:19:33.066201] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdb940) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=4096 00:29:43.144 [2024-07-15 12:19:33.066204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066288] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.144 [2024-07-15 12:19:33.066374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.144 [2024-07-15 12:19:33.066380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.144 [2024-07-15 12:19:33.066382] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066386] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=4096, cccid=4 00:29:43.144 [2024-07-15 12:19:33.066390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdb940) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=4096 00:29:43.144 [2024-07-15 12:19:33.066393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066399] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066403] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066469] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:43.144 [2024-07-15 12:19:33.066473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:43.144 [2024-07-15 12:19:33.066478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:43.144 [2024-07-15 12:19:33.066491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:43.144 [2024-07-15 12:19:33.066529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 00:29:43.144 [2024-07-15 12:19:33.066535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbac0, cid 5, qid 0 00:29:43.144 [2024-07-15 12:19:33.066622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbac0) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbac0, cid 5, qid 0 00:29:43.144 [2024-07-15 12:19:33.066748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066756] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbac0) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbac0, cid 5, qid 0 00:29:43.144 [2024-07-15 12:19:33.066875] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbac0) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.066895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.066904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.066914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbac0, cid 5, qid 0 00:29:43.144 [2024-07-15 12:19:33.066982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.144 [2024-07-15 12:19:33.066988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.144 [2024-07-15 12:19:33.066991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.066994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbac0) on tqpair=0x1f6eaf0 00:29:43.144 [2024-07-15 12:19:33.067006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.067010] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.067015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-07-15 12:19:33.067023] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.144 [2024-07-15 12:19:33.067026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f6eaf0) 00:29:43.144 [2024-07-15 12:19:33.067032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.145 [2024-07-15 12:19:33.067037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f6eaf0) 00:29:43.145 [2024-07-15 12:19:33.067046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.145 [2024-07-15 12:19:33.067052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f6eaf0) 00:29:43.145 [2024-07-15 12:19:33.067060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.145 [2024-07-15 12:19:33.067071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbac0, cid 5, qid 0 00:29:43.145 [2024-07-15 12:19:33.067076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb940, cid 4, qid 0 
00:29:43.145 [2024-07-15 12:19:33.067080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbc40, cid 6, qid 0 00:29:43.145 [2024-07-15 12:19:33.067084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbdc0, cid 7, qid 0 00:29:43.145 [2024-07-15 12:19:33.067238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.145 [2024-07-15 12:19:33.067245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.145 [2024-07-15 12:19:33.067248] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=8192, cccid=5 00:29:43.145 [2024-07-15 12:19:33.067255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdbac0) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=8192 00:29:43.145 [2024-07-15 12:19:33.067259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067293] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067297] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.145 [2024-07-15 12:19:33.067310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.145 [2024-07-15 12:19:33.067313] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067316] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=512, cccid=4 00:29:43.145 [2024-07-15 12:19:33.067320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdb940) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=512 00:29:43.145 [2024-07-15 12:19:33.067324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067329] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.145 [2024-07-15 12:19:33.067343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.145 [2024-07-15 12:19:33.067345] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067349] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=512, cccid=6 00:29:43.145 [2024-07-15 12:19:33.067353] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdbc40) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=512 00:29:43.145 [2024-07-15 12:19:33.067358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067364] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067367] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:43.145 [2024-07-15 12:19:33.067376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:43.145 [2024-07-15 12:19:33.067379] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067382] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f6eaf0): datao=0, datal=4096, cccid=7 00:29:43.145 [2024-07-15 12:19:33.067386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fdbdc0) on tqpair(0x1f6eaf0): expected_datao=0, payload_size=4096 00:29:43.145 [2024-07-15 12:19:33.067390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067396] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067399] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.145 [2024-07-15 12:19:33.067411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.145 [2024-07-15 12:19:33.067414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbac0) on tqpair=0x1f6eaf0 00:29:43.145 [2024-07-15 12:19:33.067427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.145 [2024-07-15 12:19:33.067432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.145 [2024-07-15 12:19:33.067435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb940) on tqpair=0x1f6eaf0 00:29:43.145 [2024-07-15 12:19:33.067447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.145 [2024-07-15 12:19:33.067452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.145 [2024-07-15 12:19:33.067456] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbc40) on tqpair=0x1f6eaf0 00:29:43.145 [2024-07-15 12:19:33.067465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.145 [2024-07-15 12:19:33.067470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.145 [2024-07-15 12:19:33.067474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.145 [2024-07-15 12:19:33.067477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbdc0) on tqpair=0x1f6eaf0 00:29:43.145 ===================================================== 00:29:43.145 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.145 ===================================================== 00:29:43.145 Controller Capabilities/Features 00:29:43.145 ================================ 00:29:43.145 Vendor ID: 8086 00:29:43.145 Subsystem Vendor ID: 8086 00:29:43.145 Serial Number: SPDK00000000000001 00:29:43.145 Model Number: SPDK bdev Controller 00:29:43.145 Firmware Version: 24.09 00:29:43.145 Recommended Arb Burst: 6 00:29:43.145 IEEE OUI Identifier: e4 d2 5c 00:29:43.145 Multi-path I/O 00:29:43.145 May have multiple subsystem ports: Yes 00:29:43.145 May have multiple controllers: Yes 00:29:43.145 Associated with SR-IOV VF: No 00:29:43.145 Max Data Transfer Size: 131072 00:29:43.145 Max Number of Namespaces: 32 00:29:43.145 Max Number of I/O Queues: 127 00:29:43.145 NVMe Specification Version (VS): 1.3 00:29:43.145 NVMe Specification Version (Identify): 1.3 00:29:43.145 Maximum Queue Entries: 128 00:29:43.145 Contiguous Queues Required: Yes 00:29:43.145 
Arbitration Mechanisms Supported
00:29:43.145 Weighted Round Robin: Not Supported
00:29:43.145 Vendor Specific: Not Supported
00:29:43.145 Reset Timeout: 15000 ms
00:29:43.145 Doorbell Stride: 4 bytes
00:29:43.145 NVM Subsystem Reset: Not Supported
00:29:43.145 Command Sets Supported
00:29:43.145 NVM Command Set: Supported
00:29:43.145 Boot Partition: Not Supported
00:29:43.145 Memory Page Size Minimum: 4096 bytes
00:29:43.145 Memory Page Size Maximum: 4096 bytes
00:29:43.145 Persistent Memory Region: Not Supported
00:29:43.145 Optional Asynchronous Events Supported
00:29:43.145 Namespace Attribute Notices: Supported
00:29:43.145 Firmware Activation Notices: Not Supported
00:29:43.145 ANA Change Notices: Not Supported
00:29:43.145 PLE Aggregate Log Change Notices: Not Supported
00:29:43.145 LBA Status Info Alert Notices: Not Supported
00:29:43.145 EGE Aggregate Log Change Notices: Not Supported
00:29:43.145 Normal NVM Subsystem Shutdown event: Not Supported
00:29:43.145 Zone Descriptor Change Notices: Not Supported
00:29:43.145 Discovery Log Change Notices: Not Supported
00:29:43.145 Controller Attributes
00:29:43.145 128-bit Host Identifier: Supported
00:29:43.145 Non-Operational Permissive Mode: Not Supported
00:29:43.145 NVM Sets: Not Supported
00:29:43.145 Read Recovery Levels: Not Supported
00:29:43.145 Endurance Groups: Not Supported
00:29:43.145 Predictable Latency Mode: Not Supported
00:29:43.145 Traffic Based Keep ALive: Not Supported
00:29:43.145 Namespace Granularity: Not Supported
00:29:43.145 SQ Associations: Not Supported
00:29:43.145 UUID List: Not Supported
00:29:43.145 Multi-Domain Subsystem: Not Supported
00:29:43.145 Fixed Capacity Management: Not Supported
00:29:43.145 Variable Capacity Management: Not Supported
00:29:43.145 Delete Endurance Group: Not Supported
00:29:43.145 Delete NVM Set: Not Supported
00:29:43.145 Extended LBA Formats Supported: Not Supported
00:29:43.145 Flexible Data Placement Supported: Not Supported
00:29:43.145 
00:29:43.145 Controller Memory Buffer Support
00:29:43.145 ================================
00:29:43.145 Supported: No
00:29:43.145 
00:29:43.145 Persistent Memory Region Support
00:29:43.145 ================================
00:29:43.145 Supported: No
00:29:43.145 
00:29:43.145 Admin Command Set Attributes
00:29:43.145 ============================
00:29:43.145 Security Send/Receive: Not Supported
00:29:43.145 Format NVM: Not Supported
00:29:43.145 Firmware Activate/Download: Not Supported
00:29:43.145 Namespace Management: Not Supported
00:29:43.145 Device Self-Test: Not Supported
00:29:43.145 Directives: Not Supported
00:29:43.145 NVMe-MI: Not Supported
00:29:43.145 Virtualization Management: Not Supported
00:29:43.145 Doorbell Buffer Config: Not Supported
00:29:43.145 Get LBA Status Capability: Not Supported
00:29:43.145 Command & Feature Lockdown Capability: Not Supported
00:29:43.145 Abort Command Limit: 4
00:29:43.145 Async Event Request Limit: 4
00:29:43.145 Number of Firmware Slots: N/A
00:29:43.145 Firmware Slot 1 Read-Only: N/A
00:29:43.145 Firmware Activation Without Reset: N/A
00:29:43.145 Multiple Update Detection Support: N/A
00:29:43.145 Firmware Update Granularity: No Information Provided
00:29:43.145 Per-Namespace SMART Log: No
00:29:43.145 Asymmetric Namespace Access Log Page: Not Supported
00:29:43.145 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:43.145 Command Effects Log Page: Supported
00:29:43.145 Get Log Page Extended Data: Supported
00:29:43.145 Telemetry Log Pages: Not Supported
00:29:43.145 Persistent Event Log Pages: Not Supported
00:29:43.145 Supported Log Pages Log Page: May Support
00:29:43.145 Commands Supported & Effects Log Page: Not Supported
00:29:43.145 Feature Identifiers & Effects Log Page:May Support
00:29:43.145 NVMe-MI Commands & Effects Log Page: May Support
00:29:43.145 Data Area 4 for Telemetry Log: Not Supported
00:29:43.145 Error Log Page Entries Supported: 128
00:29:43.145 Keep Alive: Supported
00:29:43.145 Keep Alive Granularity: 10000 ms
00:29:43.145 
00:29:43.145 NVM Command Set Attributes
00:29:43.145 ==========================
00:29:43.145 Submission Queue Entry Size
00:29:43.145 Max: 64
00:29:43.145 Min: 64
00:29:43.145 Completion Queue Entry Size
00:29:43.145 Max: 16
00:29:43.145 Min: 16
00:29:43.145 Number of Namespaces: 32
00:29:43.145 Compare Command: Supported
00:29:43.145 Write Uncorrectable Command: Not Supported
00:29:43.146 Dataset Management Command: Supported
00:29:43.146 Write Zeroes Command: Supported
00:29:43.146 Set Features Save Field: Not Supported
00:29:43.146 Reservations: Supported
00:29:43.146 Timestamp: Not Supported
00:29:43.146 Copy: Supported
00:29:43.146 Volatile Write Cache: Present
00:29:43.146 Atomic Write Unit (Normal): 1
00:29:43.146 Atomic Write Unit (PFail): 1
00:29:43.146 Atomic Compare & Write Unit: 1
00:29:43.146 Fused Compare & Write: Supported
00:29:43.146 Scatter-Gather List
00:29:43.146 SGL Command Set: Supported
00:29:43.146 SGL Keyed: Supported
00:29:43.146 SGL Bit Bucket Descriptor: Not Supported
00:29:43.146 SGL Metadata Pointer: Not Supported
00:29:43.146 Oversized SGL: Not Supported
00:29:43.146 SGL Metadata Address: Not Supported
00:29:43.146 SGL Offset: Supported
00:29:43.146 Transport SGL Data Block: Not Supported
00:29:43.146 Replay Protected Memory Block: Not Supported
00:29:43.146 
00:29:43.146 Firmware Slot Information
00:29:43.146 =========================
00:29:43.146 Active slot: 1
00:29:43.146 Slot 1 Firmware Revision: 24.09
00:29:43.146 
00:29:43.146 
00:29:43.146 Commands Supported and Effects
00:29:43.146 ==============================
00:29:43.146 Admin Commands
00:29:43.146 --------------
00:29:43.146 Get Log Page (02h): Supported
00:29:43.146 Identify (06h): Supported
00:29:43.146 Abort (08h): Supported
00:29:43.146 Set Features (09h): Supported
00:29:43.146 Get Features (0Ah): Supported
00:29:43.146 Asynchronous Event Request (0Ch): Supported
00:29:43.146 Keep Alive (18h): Supported
00:29:43.146 I/O Commands
00:29:43.146 ------------
00:29:43.146 Flush (00h): Supported LBA-Change
00:29:43.146 Write (01h): Supported LBA-Change
00:29:43.146 Read (02h): Supported
00:29:43.146 Compare (05h): Supported
00:29:43.146 Write Zeroes (08h): Supported LBA-Change
00:29:43.146 Dataset Management (09h): Supported LBA-Change
00:29:43.146 Copy (19h): Supported LBA-Change
00:29:43.146 
00:29:43.146 Error Log
00:29:43.146 =========
00:29:43.146 
00:29:43.146 Arbitration
00:29:43.146 ===========
00:29:43.146 Arbitration Burst: 1
00:29:43.146 
00:29:43.146 Power Management
00:29:43.146 ================
00:29:43.146 Number of Power States: 1
00:29:43.146 Current Power State: Power State #0
00:29:43.146 Power State #0:
00:29:43.146 Max Power: 0.00 W
00:29:43.146 Non-Operational State: Operational
00:29:43.146 Entry Latency: Not Reported
00:29:43.146 Exit Latency: Not Reported
00:29:43.146 Relative Read Throughput: 0
00:29:43.146 Relative Read Latency: 0
00:29:43.146 Relative Write Throughput: 0
00:29:43.146 Relative Write Latency: 0
00:29:43.146 Idle Power: Not Reported
00:29:43.146 Active Power: Not Reported
00:29:43.146 
Non-Operational Permissive Mode: Not Supported 00:29:43.146 00:29:43.146 Health Information 00:29:43.146 ================== 00:29:43.146 Critical Warnings: 00:29:43.146 Available Spare Space: OK 00:29:43.146 Temperature: OK 00:29:43.146 Device Reliability: OK 00:29:43.146 Read Only: No 00:29:43.146 Volatile Memory Backup: OK 00:29:43.146 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:43.146 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:43.146 Available Spare: 0% 00:29:43.146 Available Spare Threshold: 0% 00:29:43.146 Life Percentage Used:[2024-07-15 12:19:33.067560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.067570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.067583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdbdc0, cid 7, qid 0 00:29:43.146 [2024-07-15 12:19:33.067672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.146 [2024-07-15 12:19:33.067678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.146 [2024-07-15 12:19:33.067681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067684] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdbdc0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067711] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:43.146 [2024-07-15 12:19:33.067721] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb340) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.146 [2024-07-15 12:19:33.067732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb4c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.146 [2024-07-15 12:19:33.067740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb640) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.146 [2024-07-15 12:19:33.067749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:43.146 [2024-07-15 12:19:33.067759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.067772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.067783] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.146 [2024-07-15 12:19:33.067850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.146 [2024-07-15 12:19:33.067857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.146 [2024-07-15 12:19:33.067860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.067881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.067893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.146 [2024-07-15 12:19:33.067973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.146 [2024-07-15 12:19:33.067979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.146 [2024-07-15 12:19:33.067982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.067986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.067990] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:43.146 [2024-07-15 12:19:33.067993] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:43.146 [2024-07-15 12:19:33.068001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.068014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.068023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.146 [2024-07-15 12:19:33.068094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.146 [2024-07-15 12:19:33.068101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.146 [2024-07-15 12:19:33.068105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.068120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.068133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.068141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.146 [2024-07-15 12:19:33.068214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.146 [2024-07-15 12:19:33.068219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.146 [2024-07-15 12:19:33.068222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.146 [2024-07-15 12:19:33.068242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.146 [2024-07-15 12:19:33.068249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.146 [2024-07-15 12:19:33.068254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.146 [2024-07-15 12:19:33.068264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.068334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.068343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.068453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.068462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 
12:19:33.068563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.068572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.068686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.068694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.068802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.068810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.068920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.068925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 
[2024-07-15 12:19:33.068928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.068939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.068946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.068952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.068961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.069029] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.069035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.069038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.069050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.069064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.069073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.069154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.069160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.069162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.069174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.069180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.069186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.069195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.073233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.073242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.073245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.073249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.073258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.073262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.073265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f6eaf0) 00:29:43.147 [2024-07-15 12:19:33.073271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.147 [2024-07-15 12:19:33.073282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fdb7c0, cid 3, qid 0 00:29:43.147 [2024-07-15 12:19:33.073386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:43.147 [2024-07-15 12:19:33.073391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:43.147 [2024-07-15 12:19:33.073394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:43.147 [2024-07-15 12:19:33.073398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fdb7c0) on tqpair=0x1f6eaf0 00:29:43.147 [2024-07-15 12:19:33.073405] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:43.147 0% 00:29:43.147 Data Units Read: 0 00:29:43.147 Data Units Written: 0 00:29:43.147 Host Read Commands: 0 00:29:43.147 Host Write Commands: 0 00:29:43.147 Controller Busy Time: 0 minutes 00:29:43.147 Power Cycles: 0 00:29:43.147 Power On Hours: 0 hours 00:29:43.147 Unsafe Shutdowns: 0 00:29:43.147 Unrecoverable Media Errors: 0 00:29:43.147 Lifetime Error Log Entries: 0 00:29:43.147 Warning Temperature Time: 0 minutes 00:29:43.147 Critical Temperature Time: 0 minutes 00:29:43.147 00:29:43.147 Number of Queues 00:29:43.147 ================ 00:29:43.147 Number of I/O Submission Queues: 127 00:29:43.147 Number of I/O Completion Queues: 127 00:29:43.147 00:29:43.147 Active Namespaces 00:29:43.147 ================= 00:29:43.147 Namespace ID:1 00:29:43.147 Error Recovery Timeout: Unlimited 00:29:43.147 Command Set Identifier: NVM (00h) 00:29:43.147 Deallocate: Supported 00:29:43.147 Deallocated/Unwritten Error: Not Supported 00:29:43.147 Deallocated Read Value: Unknown 00:29:43.147 Deallocate in Write Zeroes: Not Supported 00:29:43.147 Deallocated Guard Field: 0xFFFF 00:29:43.147 Flush: Supported 00:29:43.147 Reservation: Supported 00:29:43.147 Namespace Sharing Capabilities: Multiple Controllers 00:29:43.147 Size (in LBAs): 131072 (0GiB) 00:29:43.147 Capacity (in LBAs): 131072 (0GiB) 00:29:43.147 Utilization (in LBAs): 131072 (0GiB) 00:29:43.147 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:43.147 EUI64: ABCDEF0123456789 00:29:43.147 UUID: 2cdb94fc-16d2-4122-90b6-98ea91e8f3ac 00:29:43.147 Thin Provisioning: Not Supported 00:29:43.147 Per-NS Atomic Units: Yes 00:29:43.147 Atomic Boundary Size (Normal): 0 00:29:43.147 Atomic Boundary Size (PFail): 0 00:29:43.147 Atomic Boundary Offset: 0 00:29:43.147 Maximum Single Source Range Length: 65535 00:29:43.147 Maximum Copy Length: 65535 00:29:43.147 Maximum Source Range Count: 1 00:29:43.147 NGUID/EUI64 Never Reused: No 00:29:43.147 Namespace Write Protected: No 00:29:43.147 Number of LBA Formats: 1 00:29:43.147 Current LBA Format: LBA Format #00 00:29:43.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:43.147 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:43.147 12:19:33 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:43.148 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:43.148 rmmod nvme_tcp 00:29:43.406 rmmod nvme_fabrics 00:29:43.406 rmmod nvme_keyring 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1275643 ']' 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1275643 ']' 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1275643' 00:29:43.406 killing process with pid 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1275643 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.406 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.407 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.407 12:19:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.407 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.407 12:19:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.942 12:19:35 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:45.942 00:29:45.942 real 0m9.020s 00:29:45.942 user 0m5.173s 00:29:45.942 sys 0m4.783s 00:29:45.942 12:19:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:45.942 12:19:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:45.942 ************************************ 00:29:45.942 END TEST nvmf_identify 00:29:45.942 ************************************ 00:29:45.942 12:19:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:45.942 12:19:35 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.942 12:19:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:45.942 12:19:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.942 12:19:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:45.942 ************************************ 00:29:45.942 START TEST nvmf_perf 00:29:45.942 ************************************ 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.942 * Looking for test storage... 00:29:45.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:45.942 12:19:35 
nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:45.942 12:19:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:51.215 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.215 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:51.216 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:51.216 Found net devices under 0000:86:00.0: cvl_0_0 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:51.216 12:19:41 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:51.216 Found net devices under 0000:86:00.1: cvl_0_1 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:51.216 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:51.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:51.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:29:51.475 00:29:51.475 --- 10.0.0.2 ping statistics --- 00:29:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.475 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:29:51.475 00:29:51.475 --- 10.0.0.1 ping statistics --- 00:29:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.475 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1279176 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1279176 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1279176 ']' 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.475 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:51.475 [2024-07-15 12:19:41.378151] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
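The nvmftestinit sequence traced above builds a small back-to-back NVMe/TCP test topology: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, a firewall rule admits the default NVMe/TCP port, and both directions are pinged before the kernel nvme-tcp module is loaded and nvmf_tgt is started. Condensed into plain shell, it looks roughly like the following (an illustrative sketch assembled only from the commands visible in this trace; the interface names, addresses, and port are specific to this test bed, and root privileges are assumed):
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on port 4420
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
modprobe nvme-tcp                                              # nvmf/common.sh loads the kernel module for tcp runs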
00:29:51.475 [2024-07-15 12:19:41.378197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.475 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.475 [2024-07-15 12:19:41.451141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.734 [2024-07-15 12:19:41.493342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.734 [2024-07-15 12:19:41.493378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.734 [2024-07-15 12:19:41.493385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.734 [2024-07-15 12:19:41.493391] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.734 [2024-07-15 12:19:41.493396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.734 [2024-07-15 12:19:41.493446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.734 [2024-07-15 12:19:41.493556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.734 [2024-07-15 12:19:41.493660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.734 [2024-07-15 12:19:41.493661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:51.734 12:19:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:55.020 12:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:55.020 12:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:55.020 12:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:29:55.020 12:19:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:55.278 [2024-07-15 12:19:45.207799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:29:55.278 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.536 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:55.536 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.794 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:55.794 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:55.794 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.051 [2024-07-15 12:19:45.935820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.051 12:19:45 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:56.309 12:19:46 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:29:56.309 12:19:46 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:56.309 12:19:46 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:56.309 12:19:46 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:29:57.684 Initializing NVMe Controllers 00:29:57.684 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:29:57.684 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:29:57.684 Initialization complete. Launching workers. 00:29:57.684 ======================================================== 00:29:57.684 Latency(us) 00:29:57.684 Device Information : IOPS MiB/s Average min max 00:29:57.684 PCIE (0000:5e:00.0) NSID 1 from core 0: 98352.43 384.19 324.84 9.55 4465.12 00:29:57.684 ======================================================== 00:29:57.684 Total : 98352.43 384.19 324.84 9.55 4465.12 00:29:57.684 00:29:57.684 12:19:47 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.684 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.062 Initializing NVMe Controllers 00:29:59.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:59.062 Initialization complete. Launching workers. 
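Stripped of the xtrace noise, the target bring-up and the benchmark runs traced above reduce to roughly the following RPC and perf invocations, issued with nvmf_tgt already running (an abridged sketch of what host/perf.sh drives, not a verbatim replay; paths are shortened to the SPDK repository root, and the NQN, serial, addresses, and PCIe address are the ones reported in this run):
rpc=scripts/rpc.py
scripts/gen_nvme.sh | $rpc load_subsystem_config     # attach the local NVMe disk, exposed as bdev Nvme0n1
$rpc bdev_malloc_create 64 512                       # 64 MiB malloc bdev with 512-byte blocks -> Malloc0
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# baseline the drive locally over PCIe, then repeat the workload over NVMe/TCP
build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'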
00:29:59.062 ======================================================== 00:29:59.062 Latency(us) 00:29:59.062 Device Information : IOPS MiB/s Average min max 00:29:59.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.68 0.36 11254.69 125.70 45687.95 00:29:59.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.78 0.24 16581.66 7965.05 53867.57 00:29:59.062 ======================================================== 00:29:59.062 Total : 152.46 0.60 13378.52 125.70 53867.57 00:29:59.062 00:29:59.062 12:19:48 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.062 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.440 Initializing NVMe Controllers 00:30:00.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:00.440 Initialization complete. Launching workers. 00:30:00.440 ======================================================== 00:30:00.440 Latency(us) 00:30:00.440 Device Information : IOPS MiB/s Average min max 00:30:00.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10850.82 42.39 2948.94 370.31 9574.25 00:30:00.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3812.26 14.89 8471.02 5599.31 47775.36 00:30:00.440 ======================================================== 00:30:00.440 Total : 14663.08 57.28 4384.63 370.31 47775.36 00:30:00.440 00:30:00.440 12:19:50 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:00.440 12:19:50 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:00.440 12:19:50 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.440 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.016 Initializing NVMe Controllers 00:30:03.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.016 Controller IO queue size 128, less than required. 00:30:03.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.016 Controller IO queue size 128, less than required. 00:30:03.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:03.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:03.016 Initialization complete. Launching workers. 
00:30:03.016 ======================================================== 00:30:03.016 Latency(us) 00:30:03.016 Device Information : IOPS MiB/s Average min max 00:30:03.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.50 399.37 81092.50 55434.70 133779.02 00:30:03.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 632.50 158.12 209950.41 77480.99 336019.15 00:30:03.016 ======================================================== 00:30:03.016 Total : 2230.00 557.50 117640.76 55434.70 336019.15 00:30:03.016 00:30:03.016 12:19:52 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:03.016 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.016 No valid NVMe controllers or AIO or URING devices found 00:30:03.016 Initializing NVMe Controllers 00:30:03.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.016 Controller IO queue size 128, less than required. 00:30:03.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.016 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:03.016 Controller IO queue size 128, less than required. 00:30:03.016 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:03.017 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:03.017 WARNING: Some requested NVMe devices were skipped 00:30:03.017 12:19:52 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:03.017 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.550 Initializing NVMe Controllers 00:30:05.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.550 Controller IO queue size 128, less than required. 00:30:05.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.550 Controller IO queue size 128, less than required. 00:30:05.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:05.550 Initialization complete. Launching workers. 
00:30:05.550 00:30:05.550 ==================== 00:30:05.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:05.550 TCP transport: 00:30:05.550 polls: 19664 00:30:05.550 idle_polls: 12691 00:30:05.550 sock_completions: 6973 00:30:05.550 nvme_completions: 6047 00:30:05.550 submitted_requests: 9028 00:30:05.550 queued_requests: 1 00:30:05.550 00:30:05.550 ==================== 00:30:05.550 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:05.550 TCP transport: 00:30:05.550 polls: 16024 00:30:05.550 idle_polls: 8069 00:30:05.550 sock_completions: 7955 00:30:05.550 nvme_completions: 7047 00:30:05.550 submitted_requests: 10550 00:30:05.550 queued_requests: 1 00:30:05.550 ======================================================== 00:30:05.550 Latency(us) 00:30:05.550 Device Information : IOPS MiB/s Average min max 00:30:05.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1510.66 377.67 86289.39 54034.10 146667.78 00:30:05.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1760.52 440.13 73226.82 26323.99 104993.73 00:30:05.550 ======================================================== 00:30:05.550 Total : 3271.19 817.80 79259.23 26323.99 146667.78 00:30:05.550 00:30:05.550 12:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:05.550 12:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.808 12:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:05.808 12:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:05.808 12:19:55 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:09.094 12:19:58 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=dfc7222d-99b6-4318-8dd1-809df586f823 00:30:09.094 12:19:58 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb dfc7222d-99b6-4318-8dd1-809df586f823 00:30:09.094 12:19:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=dfc7222d-99b6-4318-8dd1-809df586f823 00:30:09.094 12:19:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:09.095 12:19:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:09.095 12:19:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:09.095 12:19:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:09.353 { 00:30:09.353 "uuid": "dfc7222d-99b6-4318-8dd1-809df586f823", 00:30:09.353 "name": "lvs_0", 00:30:09.353 "base_bdev": "Nvme0n1", 00:30:09.353 "total_data_clusters": 238234, 00:30:09.353 "free_clusters": 238234, 00:30:09.353 "block_size": 512, 00:30:09.353 "cluster_size": 4194304 00:30:09.353 } 00:30:09.353 ]' 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="dfc7222d-99b6-4318-8dd1-809df586f823") .free_clusters' 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="dfc7222d-99b6-4318-8dd1-809df586f823") .cluster_size' 00:30:09.353 12:19:59 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:09.353 952936 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:09.353 12:19:59 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dfc7222d-99b6-4318-8dd1-809df586f823 lbd_0 20480 00:30:09.610 12:19:59 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=6d0b8ef6-249c-481b-ba26-488dcc51087e 00:30:09.611 12:19:59 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 6d0b8ef6-249c-481b-ba26-488dcc51087e lvs_n_0 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:10.546 { 00:30:10.546 "uuid": "dfc7222d-99b6-4318-8dd1-809df586f823", 00:30:10.546 "name": "lvs_0", 00:30:10.546 "base_bdev": "Nvme0n1", 00:30:10.546 "total_data_clusters": 238234, 00:30:10.546 "free_clusters": 233114, 00:30:10.546 "block_size": 512, 00:30:10.546 "cluster_size": 4194304 00:30:10.546 }, 00:30:10.546 { 00:30:10.546 "uuid": "adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a", 00:30:10.546 "name": "lvs_n_0", 00:30:10.546 "base_bdev": "6d0b8ef6-249c-481b-ba26-488dcc51087e", 00:30:10.546 "total_data_clusters": 5114, 00:30:10.546 "free_clusters": 5114, 00:30:10.546 "block_size": 512, 00:30:10.546 "cluster_size": 4194304 00:30:10.546 } 00:30:10.546 ]' 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a") .free_clusters' 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a") .cluster_size' 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:10.546 20456 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:10.546 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u adf08fb1-69d3-4ad2-a9ab-dc8c8f9ede7a lbd_nest_0 20456 00:30:10.805 12:20:00 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=ac27d841-1a04-4786-87ed-95f35de2a0cd 00:30:10.805 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.064 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:11.064 12:20:00 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ac27d841-1a04-4786-87ed-95f35de2a0cd 00:30:11.064 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.322 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:11.322 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:11.322 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:11.323 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:11.323 12:20:01 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.323 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.523 Initializing NVMe Controllers 00:30:23.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.524 Initialization complete. Launching workers. 00:30:23.524 ======================================================== 00:30:23.524 Latency(us) 00:30:23.524 Device Information : IOPS MiB/s Average min max 00:30:23.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.60 0.02 20208.27 145.64 45677.83 00:30:23.524 ======================================================== 00:30:23.524 Total : 49.60 0.02 20208.27 145.64 45677.83 00:30:23.524 00:30:23.524 12:20:11 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:23.524 12:20:11 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.524 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.502 Initializing NVMe Controllers 00:30:33.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:33.502 Initialization complete. Launching workers. 
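The results that follow come from the sweep set up at perf.sh lines 95-99 above: three queue depths crossed with two IO sizes, six 10-second runs in total against the nqn.2016-06.io.spdk:cnode1 listener on 10.0.0.2:4420. In outline (workspace path shortened), the loop is:

  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      # 50/50 random read/write for 10 seconds at this queue depth and IO size
      build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done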
00:30:33.502 ======================================================== 00:30:33.502 Latency(us) 00:30:33.502 Device Information : IOPS MiB/s Average min max 00:30:33.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.60 9.70 12894.59 7031.29 48816.02 00:30:33.502 ======================================================== 00:30:33.502 Total : 77.60 9.70 12894.59 7031.29 48816.02 00:30:33.502 00:30:33.502 12:20:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.502 12:20:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.502 12:20:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:33.502 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.608 Initializing NVMe Controllers 00:30:43.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:43.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:43.608 Initialization complete. Launching workers. 00:30:43.608 ======================================================== 00:30:43.608 Latency(us) 00:30:43.608 Device Information : IOPS MiB/s Average min max 00:30:43.608 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8645.14 4.22 3701.14 231.75 8781.69 00:30:43.608 ======================================================== 00:30:43.608 Total : 8645.14 4.22 3701.14 231.75 8781.69 00:30:43.608 00:30:43.608 12:20:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:43.608 12:20:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:43.608 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.607 Initializing NVMe Controllers 00:30:53.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:53.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:53.607 Initialization complete. Launching workers. 00:30:53.607 ======================================================== 00:30:53.607 Latency(us) 00:30:53.607 Device Information : IOPS MiB/s Average min max 00:30:53.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3234.17 404.27 9897.71 575.00 22661.47 00:30:53.607 ======================================================== 00:30:53.607 Total : 3234.17 404.27 9897.71 575.00 22661.47 00:30:53.607 00:30:53.607 12:20:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:53.607 12:20:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:53.607 12:20:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:53.607 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.584 Initializing NVMe Controllers 00:31:03.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.584 Controller IO queue size 128, less than required. 00:31:03.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:03.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:03.584 Initialization complete. Launching workers. 00:31:03.584 ======================================================== 00:31:03.584 Latency(us) 00:31:03.584 Device Information : IOPS MiB/s Average min max 00:31:03.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15878.50 7.75 8064.89 1355.51 22780.56 00:31:03.584 ======================================================== 00:31:03.584 Total : 15878.50 7.75 8064.89 1355.51 22780.56 00:31:03.584 00:31:03.584 12:20:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:03.584 12:20:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:03.584 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.556 Initializing NVMe Controllers 00:31:13.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.557 Controller IO queue size 128, less than required. 00:31:13.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:13.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:13.557 Initialization complete. Launching workers. 00:31:13.557 ======================================================== 00:31:13.557 Latency(us) 00:31:13.557 Device Information : IOPS MiB/s Average min max 00:31:13.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.77 150.60 106580.08 16382.52 227316.00 00:31:13.557 ======================================================== 00:31:13.557 Total : 1204.77 150.60 106580.08 16382.52 227316.00 00:31:13.557 00:31:13.557 12:21:03 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.557 12:21:03 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac27d841-1a04-4786-87ed-95f35de2a0cd 00:31:14.493 12:21:04 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:14.493 12:21:04 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d0b8ef6-249c-481b-ba26-488dcc51087e 00:31:14.751 12:21:04 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:15.010 rmmod nvme_tcp 00:31:15.010 rmmod nvme_fabrics 00:31:15.010 rmmod nvme_keyring 00:31:15.010 12:21:04 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1279176 ']' 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1279176 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1279176 ']' 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1279176 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279176 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279176' 00:31:15.010 killing process with pid 1279176 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1279176 00:31:15.010 12:21:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1279176 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.401 12:21:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.990 12:21:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:18.990 00:31:18.990 real 1m32.909s 00:31:18.990 user 5m32.901s 00:31:18.990 sys 0m15.780s 00:31:18.990 12:21:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:18.990 12:21:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:18.990 ************************************ 00:31:18.990 END TEST nvmf_perf 00:31:18.990 ************************************ 00:31:18.990 12:21:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:18.990 12:21:08 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:18.990 12:21:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:18.990 12:21:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.990 12:21:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.990 ************************************ 00:31:18.990 START TEST nvmf_fio_host 00:31:18.990 ************************************ 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:18.990 * Looking for test 
storage... 00:31:18.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.990 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.991 12:21:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:24.258 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:24.258 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:24.258 Found net devices under 0000:86:00.0: cvl_0_0 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:24.258 Found net devices under 0000:86:00.1: cvl_0_1 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.258 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:31:24.517 00:31:24.517 --- 10.0.0.2 ping statistics --- 00:31:24.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.517 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:24.517 00:31:24.517 --- 10.0.0.1 ping statistics --- 00:31:24.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.517 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1296769 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1296769 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1296769 ']' 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.517 12:21:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.517 [2024-07-15 12:21:14.449383] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:31:24.517 [2024-07-15 12:21:14.449427] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.517 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.776 [2024-07-15 12:21:14.522857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:24.776 [2024-07-15 12:21:14.563395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:24.776 [2024-07-15 12:21:14.563438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.776 [2024-07-15 12:21:14.563445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.776 [2024-07-15 12:21:14.563451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.776 [2024-07-15 12:21:14.563455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.776 [2024-07-15 12:21:14.563530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.776 [2024-07-15 12:21:14.563637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.776 [2024-07-15 12:21:14.563724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.776 [2024-07-15 12:21:14.563723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.345 12:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.345 12:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:31:25.345 12:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.604 [2024-07-15 12:21:15.410070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.604 12:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:25.604 12:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.604 12:21:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.604 12:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:25.862 Malloc1 00:31:25.862 12:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:25.862 12:21:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:26.122 12:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.380 [2024-07-15 12:21:16.188414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.380 12:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:26.663 12:21:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.922 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:26.922 fio-3.35 00:31:26.922 Starting 1 thread 00:31:26.922 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.439 00:31:29.440 test: (groupid=0, jobs=1): err= 0: pid=1297186: Mon Jul 15 12:21:19 2024 00:31:29.440 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.9MiB/2006msec) 00:31:29.440 slat (nsec): min=1612, max=254863, avg=1762.60, stdev=2294.70 00:31:29.440 clat (usec): min=2844, max=10064, avg=6045.27, stdev=450.10 00:31:29.440 lat (usec): min=2879, max=10065, avg=6047.04, stdev=450.05 00:31:29.440 clat percentiles (usec): 00:31:29.440 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:31:29.440 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:31:29.440 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:31:29.440 | 99.00th=[ 7046], 99.50th=[ 7111], 99.90th=[ 7832], 99.95th=[ 9241], 00:31:29.440 | 99.99th=[10028] 00:31:29.440 bw ( KiB/s): 
min=46048, max=47536, per=100.00%, avg=46918.00, stdev=657.67, samples=4 00:31:29.440 iops : min=11512, max=11884, avg=11729.50, stdev=164.42, samples=4 00:31:29.440 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.3MiB/2006msec); 0 zone resets 00:31:29.440 slat (nsec): min=1662, max=228611, avg=1838.99, stdev=1681.41 00:31:29.440 clat (usec): min=2468, max=9329, avg=4874.09, stdev=388.19 00:31:29.440 lat (usec): min=2483, max=9331, avg=4875.93, stdev=388.20 00:31:29.440 clat percentiles (usec): 00:31:29.440 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:31:29.440 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:31:29.440 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5473], 00:31:29.440 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7832], 99.95th=[ 8455], 00:31:29.440 | 99.99th=[ 9241] 00:31:29.440 bw ( KiB/s): min=46152, max=47040, per=100.00%, avg=46604.00, stdev=363.01, samples=4 00:31:29.440 iops : min=11538, max=11760, avg=11651.00, stdev=90.75, samples=4 00:31:29.440 lat (msec) : 4=0.66%, 10=99.33%, 20=0.01% 00:31:29.440 cpu : usr=71.47%, sys=25.94%, ctx=81, majf=0, minf=6 00:31:29.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:29.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.440 issued rwts: total=23523,23365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.440 00:31:29.440 Run status group 0 (all jobs): 00:31:29.440 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.9MiB (96.3MB), run=2006-2006msec 00:31:29.440 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.3MiB (95.7MB), run=2006-2006msec 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:29.440 12:21:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.440 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:29.440 fio-3.35 00:31:29.440 Starting 1 thread 00:31:29.440 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.991 00:31:31.991 test: (groupid=0, jobs=1): err= 0: pid=1297735: Mon Jul 15 12:21:21 2024 00:31:31.991 read: IOPS=10.5k, BW=165MiB/s (173MB/s)(331MiB/2009msec) 00:31:31.991 slat (nsec): min=2612, max=94144, avg=2928.79, stdev=1580.91 00:31:31.991 clat (usec): min=1641, max=14028, avg=7047.19, stdev=1803.45 00:31:31.991 lat (usec): min=1644, max=14031, avg=7050.11, stdev=1803.66 00:31:31.991 clat percentiles (usec): 00:31:31.991 | 1.00th=[ 3654], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5473], 00:31:31.991 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7439], 00:31:31.991 | 70.00th=[ 7898], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10028], 00:31:31.991 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13304], 99.95th=[13698], 00:31:31.991 | 99.99th=[13829] 00:31:31.991 bw ( KiB/s): min=76448, max=93728, per=50.68%, avg=85408.00, stdev=7081.62, samples=4 00:31:31.991 iops : min= 4778, max= 5858, avg=5338.00, stdev=442.60, samples=4 00:31:31.991 write: IOPS=6211, BW=97.1MiB/s (102MB/s)(174MiB/1797msec); 0 zone resets 00:31:31.991 slat (usec): min=30, max=388, avg=32.88, stdev=10.44 00:31:31.991 clat (usec): min=3220, max=14911, avg=8645.51, stdev=1599.70 00:31:31.991 lat (usec): min=3250, max=15022, avg=8678.39, stdev=1602.68 00:31:31.991 clat percentiles (usec): 00:31:31.991 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:31:31.991 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:31:31.991 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11469], 00:31:31.991 | 99.00th=[12911], 99.50th=[13829], 99.90th=[14353], 99.95th=[14746], 00:31:31.991 | 99.99th=[14877] 00:31:31.991 bw ( KiB/s): min=80480, max=97760, per=89.56%, avg=89008.00, stdev=7099.12, samples=4 00:31:31.991 iops : min= 5030, max= 6110, avg=5563.00, stdev=443.70, samples=4 00:31:31.991 lat (msec) : 2=0.02%, 4=1.55%, 10=88.01%, 20=10.42% 00:31:31.991 cpu : usr=83.37%, sys=14.19%, ctx=82, majf=0, 
minf=3 00:31:31.991 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:31:31.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:31.991 issued rwts: total=21162,11162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.991 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:31.991 00:31:31.991 Run status group 0 (all jobs): 00:31:31.991 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=331MiB (347MB), run=2009-2009msec 00:31:31.991 WRITE: bw=97.1MiB/s (102MB/s), 97.1MiB/s-97.1MiB/s (102MB/s-102MB/s), io=174MiB (183MB), run=1797-1797msec 00:31:31.991 12:21:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.249 12:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:32.249 12:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:32.249 12:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:32.249 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:31:32.250 12:21:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:35.533 Nvme0n1 00:31:35.533 12:21:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=597c9d18-e163-4b09-8275-1029fc2290df 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 597c9d18-e163-4b09-8275-1029fc2290df 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=597c9d18-e163-4b09-8275-1029fc2290df 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:38.061 12:21:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:38.319 { 00:31:38.319 "uuid": "597c9d18-e163-4b09-8275-1029fc2290df", 00:31:38.319 "name": "lvs_0", 00:31:38.319 "base_bdev": "Nvme0n1", 00:31:38.319 "total_data_clusters": 930, 00:31:38.319 "free_clusters": 930, 00:31:38.319 "block_size": 
512, 00:31:38.319 "cluster_size": 1073741824 00:31:38.319 } 00:31:38.319 ]' 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="597c9d18-e163-4b09-8275-1029fc2290df") .free_clusters' 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="597c9d18-e163-4b09-8275-1029fc2290df") .cluster_size' 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:38.319 952320 00:31:38.319 12:21:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:38.577 5d7086be-a048-49d1-924e-3e3753b881e4 00:31:38.577 12:21:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:38.834 12:21:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:39.091 12:21:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 
-- # [[ -n '' ]] 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:39.447 12:21:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.749 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:39.749 fio-3.35 00:31:39.749 Starting 1 thread 00:31:39.749 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.279 00:31:42.279 test: (groupid=0, jobs=1): err= 0: pid=1299484: Mon Jul 15 12:21:31 2024 00:31:42.279 read: IOPS=7966, BW=31.1MiB/s (32.6MB/s)(62.5MiB/2007msec) 00:31:42.279 slat (nsec): min=1608, max=91969, avg=1708.25, stdev=1055.64 00:31:42.279 clat (usec): min=656, max=170107, avg=8848.71, stdev=10332.40 00:31:42.279 lat (usec): min=658, max=170126, avg=8850.42, stdev=10332.55 00:31:42.279 clat percentiles (msec): 00:31:42.279 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:31:42.279 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:31:42.279 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:31:42.279 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 171], 99.95th=[ 171], 00:31:42.279 | 99.99th=[ 171] 00:31:42.279 bw ( KiB/s): min=22512, max=35000, per=99.93%, avg=31842.00, stdev=6220.15, samples=4 00:31:42.279 iops : min= 5628, max= 8750, avg=7960.50, stdev=1555.04, samples=4 00:31:42.279 write: IOPS=7938, BW=31.0MiB/s (32.5MB/s)(62.2MiB/2007msec); 0 zone resets 00:31:42.279 slat (nsec): min=1660, max=93248, avg=1784.69, stdev=848.66 00:31:42.279 clat (usec): min=217, max=168544, avg=7160.41, stdev=9659.43 00:31:42.279 lat (usec): min=219, max=168549, avg=7162.19, stdev=9659.62 00:31:42.279 clat percentiles (msec): 00:31:42.279 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:31:42.279 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:42.279 | 70.00th=[ 7], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 8], 00:31:42.279 | 99.00th=[ 9], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:31:42.279 | 99.99th=[ 169] 00:31:42.279 bw ( KiB/s): min=23400, max=34592, per=99.97%, avg=31746.00, stdev=5564.18, samples=4 00:31:42.279 iops : min= 5850, max= 8648, avg=7936.50, stdev=1391.05, samples=4 00:31:42.279 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:42.279 lat (msec) : 2=0.04%, 4=0.22%, 10=98.86%, 20=0.45%, 250=0.40% 00:31:42.279 cpu : usr=69.54%, sys=28.56%, ctx=77, majf=0, minf=6 00:31:42.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:42.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.279 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.279 issued rwts: total=15988,15933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.279 00:31:42.279 Run status group 0 (all jobs): 00:31:42.279 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=62.5MiB (65.5MB), run=2007-2007msec 00:31:42.279 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.2MiB (65.3MB), run=2007-2007msec 00:31:42.279 12:21:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.279 12:21:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=027168a7-8d74-4719-8e4f-b977aa8546e3 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 027168a7-8d74-4719-8e4f-b977aa8546e3 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=027168a7-8d74-4719-8e4f-b977aa8546e3 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:43.215 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:43.472 { 00:31:43.472 "uuid": "597c9d18-e163-4b09-8275-1029fc2290df", 00:31:43.472 "name": "lvs_0", 00:31:43.472 "base_bdev": "Nvme0n1", 00:31:43.472 "total_data_clusters": 930, 00:31:43.472 "free_clusters": 0, 00:31:43.472 "block_size": 512, 00:31:43.472 "cluster_size": 1073741824 00:31:43.472 }, 00:31:43.472 { 00:31:43.472 "uuid": "027168a7-8d74-4719-8e4f-b977aa8546e3", 00:31:43.472 "name": "lvs_n_0", 00:31:43.472 "base_bdev": "5d7086be-a048-49d1-924e-3e3753b881e4", 00:31:43.472 "total_data_clusters": 237847, 00:31:43.472 "free_clusters": 237847, 00:31:43.472 "block_size": 512, 00:31:43.472 "cluster_size": 4194304 00:31:43.472 } 00:31:43.472 ]' 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="027168a7-8d74-4719-8e4f-b977aa8546e3") .free_clusters' 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="027168a7-8d74-4719-8e4f-b977aa8546e3") .cluster_size' 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:43.472 951388 00:31:43.472 12:21:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:44.039 bb667ce7-79e1-431a-bf9f-e7a47bfcc321 00:31:44.039 12:21:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:44.296 12:21:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.554 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:44.820 12:21:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.076 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.076 fio-3.35 00:31:45.076 Starting 1 thread 00:31:45.076 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.596 00:31:47.596 test: (groupid=0, jobs=1): err= 0: pid=1300516: Mon Jul 15 12:21:37 2024 00:31:47.596 read: IOPS=7757, BW=30.3MiB/s (31.8MB/s)(60.8MiB/2006msec) 00:31:47.596 slat (nsec): min=1607, max=117391, avg=1703.84, stdev=1191.38 00:31:47.596 clat (usec): min=3104, max=15096, avg=9086.26, stdev=804.93 00:31:47.596 lat (usec): min=3108, max=15098, avg=9087.97, stdev=804.87 00:31:47.596 clat percentiles (usec): 00:31:47.596 | 1.00th=[ 7242], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:31:47.596 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:31:47.596 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:31:47.596 | 99.00th=[10814], 99.50th=[11076], 99.90th=[13566], 99.95th=[14484], 00:31:47.596 | 99.99th=[15008] 00:31:47.596 bw ( KiB/s): min=29752, max=31560, per=99.80%, avg=30968.00, stdev=844.39, samples=4 00:31:47.596 iops : min= 7438, max= 7890, avg=7742.00, stdev=211.10, samples=4 00:31:47.596 write: IOPS=7742, BW=30.2MiB/s (31.7MB/s)(60.7MiB/2006msec); 0 zone resets 00:31:47.596 slat (nsec): min=1660, max=89470, avg=1781.32, stdev=825.62 00:31:47.596 clat (usec): min=1430, max=13457, avg=7330.76, stdev=657.05 00:31:47.596 lat (usec): min=1435, max=13459, avg=7332.54, stdev=657.02 00:31:47.596 clat percentiles (usec): 00:31:47.596 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6849], 00:31:47.596 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:31:47.596 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:31:47.596 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10945], 99.95th=[12518], 00:31:47.596 | 99.99th=[12780] 00:31:47.596 bw ( KiB/s): min=30848, max=31168, per=99.98%, avg=30964.00, stdev=140.32, samples=4 00:31:47.596 iops : min= 7712, max= 7792, avg=7741.00, stdev=35.08, samples=4 00:31:47.596 lat (msec) : 2=0.01%, 4=0.07%, 10=94.37%, 20=5.55% 00:31:47.596 cpu : usr=70.62%, sys=27.68%, ctx=90, majf=0, minf=6 00:31:47.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:47.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:47.596 issued rwts: total=15562,15531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:47.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:47.596 00:31:47.596 Run status group 0 (all jobs): 00:31:47.596 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.8MiB (63.7MB), run=2006-2006msec 00:31:47.596 WRITE: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=60.7MiB (63.6MB), run=2006-2006msec 00:31:47.596 12:21:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:47.596 12:21:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:47.596 12:21:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:51.783 12:21:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore 
-l lvs_n_0 00:31:51.783 12:21:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:54.301 12:21:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:54.557 12:21:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.451 rmmod nvme_tcp 00:31:56.451 rmmod nvme_fabrics 00:31:56.451 rmmod nvme_keyring 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1296769 ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1296769 ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296769' 00:31:56.451 killing process with pid 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1296769 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:56.451 12:21:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.009 12:21:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:59.009 00:31:59.009 real 0m39.953s 00:31:59.009 user 2m39.165s 00:31:59.009 sys 0m8.840s 00:31:59.009 12:21:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:59.009 12:21:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.009 ************************************ 00:31:59.009 END TEST nvmf_fio_host 00:31:59.009 ************************************ 00:31:59.009 12:21:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:59.009 12:21:48 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:59.009 12:21:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:59.009 12:21:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.009 12:21:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.009 ************************************ 00:31:59.009 START TEST nvmf_failover 00:31:59.009 ************************************ 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:59.009 * Looking for test storage... 00:31:59.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.009 12:21:48 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.009 12:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:31:59.010 12:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:04.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:04.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:04.329 Found net devices under 0000:86:00.0: cvl_0_0 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:04.329 Found net devices under 0000:86:00.1: cvl_0_1 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:04.329 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:04.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:32:04.588 00:32:04.588 --- 10.0.0.2 ping statistics --- 00:32:04.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.588 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:04.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:32:04.588 00:32:04.588 --- 10.0.0.1 ping statistics --- 00:32:04.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.588 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1305659 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1305659 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1305659 ']' 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:04.588 12:21:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:04.588 [2024-07-15 12:21:54.507573] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:32:04.588 [2024-07-15 12:21:54.507618] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.588 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.588 [2024-07-15 12:21:54.580019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:04.846 [2024-07-15 12:21:54.620924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.846 [2024-07-15 12:21:54.620965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.846 [2024-07-15 12:21:54.620972] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.846 [2024-07-15 12:21:54.620978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.846 [2024-07-15 12:21:54.620983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.846 [2024-07-15 12:21:54.621105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.846 [2024-07-15 12:21:54.621216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.846 [2024-07-15 12:21:54.621218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.410 12:21:55 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:05.666 [2024-07-15 12:21:55.504018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.666 12:21:55 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:05.923 Malloc0 00:32:05.923 12:21:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:05.923 12:21:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:06.180 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@26 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.437 [2024-07-15 12:21:56.256489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.437 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:06.437 [2024-07-15 12:21:56.432962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:06.694 [2024-07-15 12:21:56.613565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1306121 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1306121 /var/tmp/bdevperf.sock 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1306121 ']' 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:06.694 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:06.951 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:06.951 12:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:06.951 12:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:07.207 NVMe0n1 00:32:07.464 12:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:07.720 00:32:07.720 12:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1306307 00:32:07.720 12:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:07.720 12:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:08.649 12:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.905 [2024-07-15 12:21:58.787600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787696] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.905 [2024-07-15 12:21:58.787713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787718] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787805] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the 
state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 [2024-07-15 12:21:58.787881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129b270 is same with the state(5) to be set 00:32:08.906 12:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:12.175 12:22:01 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:12.431 00:32:12.431 12:22:02 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:12.431 [2024-07-15 12:22:02.391452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 
00:32:12.431 [2024-07-15 12:22:02.391568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.431 [2024-07-15 12:22:02.391670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 [2024-07-15 12:22:02.391751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129c630 is same with the state(5) to be set 00:32:12.432 12:22:02 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:15.704 12:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.704 [2024-07-15 12:22:05.584342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.704 12:22:05 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:16.634 12:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:16.892 [2024-07-15 12:22:06.785644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same 
with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785812] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785852] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785925] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 [2024-07-15 12:22:06.785943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cd10 is same with the state(5) to be set 00:32:16.892 12:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1306307 00:32:23.457 0 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1306121 ']' 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1306121' 00:32:23.457 killing process with pid 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1306121 00:32:23.457 12:22:12 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.457 [2024-07-15 12:21:56.668769] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:32:23.457 [2024-07-15 12:21:56.668821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306121 ] 00:32:23.457 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.457 [2024-07-15 12:21:56.736937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.457 [2024-07-15 12:21:56.777333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.457 Running I/O for 15 seconds... 00:32:23.457 [2024-07-15 12:21:58.788967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.457 [2024-07-15 12:21:58.789219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.457 [2024-07-15 12:21:58.789234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 
[2024-07-15 12:21:58.789570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.458 [2024-07-15 12:21:58.789737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.458 [2024-07-15 12:21:58.789854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.458 [2024-07-15 12:21:58.789862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.789985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.789993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94392 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 
12:21:58.790157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.459 [2024-07-15 12:21:58.790447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.459 [2024-07-15 12:21:58.790487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 
0x0 00:32:23.459 [2024-07-15 12:21:58.790493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.459 [2024-07-15 12:21:58.790502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.459 [2024-07-15 12:21:58.790507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.459 [2024-07-15 12:21:58.790514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790639] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94696 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94712 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94720 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94728 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94736 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94744 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94752 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94760 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94776 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94784 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94792 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.790978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.790982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.790988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94800 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.790994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.791001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.791006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.791011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94808 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.791017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.791024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.791029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.791034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94816 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.791040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.800393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.800407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.800418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94824 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.800433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 
12:21:58.800443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.800449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.460 [2024-07-15 12:21:58.800457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94832 len:8 PRP1 0x0 PRP2 0x0 00:32:23.460 [2024-07-15 12:21:58.800468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.460 [2024-07-15 12:21:58.800480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.460 [2024-07-15 12:21:58.800487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.461 [2024-07-15 12:21:58.800495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94840 len:8 PRP1 0x0 PRP2 0x0 00:32:23.461 [2024-07-15 12:21:58.800507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.461 [2024-07-15 12:21:58.800529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.461 [2024-07-15 12:21:58.800537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94848 len:8 PRP1 0x0 PRP2 0x0 00:32:23.461 [2024-07-15 12:21:58.800549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.461 [2024-07-15 12:21:58.800569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.461 [2024-07-15 12:21:58.800580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0 00:32:23.461 [2024-07-15 12:21:58.800592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.461 [2024-07-15 12:21:58.800610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.461 [2024-07-15 12:21:58.800617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94864 len:8 PRP1 0x0 PRP2 0x0 00:32:23.461 [2024-07-15 12:21:58.800626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.461 [2024-07-15 12:21:58.800645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.461 [2024-07-15 12:21:58.800653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94872 len:8 PRP1 0x0 PRP2 0x0 00:32:23.461 [2024-07-15 12:21:58.800661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800707] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239d5c0 was disconnected and freed. reset controller. 00:32:23.461 [2024-07-15 12:21:58.800722] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:23.461 [2024-07-15 12:21:58.800750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.461 [2024-07-15 12:21:58.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.461 [2024-07-15 12:21:58.800782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.461 [2024-07-15 12:21:58.800801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.461 [2024-07-15 12:21:58.800821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:21:58.800831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.461 [2024-07-15 12:21:58.800873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2376fd0 (9): Bad file descriptor 00:32:23.461 [2024-07-15 12:21:58.804742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.461 [2024-07-15 12:21:58.875550] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
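(Aside, not part of the captured log: the entries above show the bdev_nvme layer aborting queued I/O, failing over from 10.0.0.2:4420 to 10.0.0.2:4421, and completing a controller reset. A setup that produces this sequence can be sketched with the standard scripts/rpc.py tooling; the subsystem NQN and the two listener addresses are taken from the log, while the bdev name, malloc size, and serial number below are illustrative assumptions, and the exact options accepted can vary between SPDK releases, so treat this as an approximation of the test configuration rather than the exact commands this job ran.)

  # Target side: export one namespace behind two TCP listeners (primary 4420, secondary 4421).
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side: attach the controller under the same bdev name for both trids so the
  # 4421 path is available as a failover target (newer SPDK releases may also require
  # an explicit multipath/failover mode option on the second attach).
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Removing the primary listener while I/O is in flight triggers the SQ DELETION aborts
  # and the "Start failover ... 4420 to ... 4421" / "Resetting controller successful" messages above.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420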
00:32:23.461 [2024-07-15 12:22:02.393960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.461 [2024-07-15 12:22:02.393996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394150] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.461 [2024-07-15 12:22:02.394291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.461 [2024-07-15 12:22:02.394297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 
[2024-07-15 12:22:02.394611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.462 [2024-07-15 12:22:02.394812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.462 [2024-07-15 12:22:02.394826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.462 [2024-07-15 12:22:02.394914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.462 [2024-07-15 12:22:02.394922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.394928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.394936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.394950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.394962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.394970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.394977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.394985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.394991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.394999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.463 [2024-07-15 12:22:02.395108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.463 [2024-07-15 12:22:02.395183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.463 [2024-07-15 12:22:02.395197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.463 [2024-07-15 12:22:02.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.463 [2024-07-15 12:22:02.395229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2376fd0 is same with the state(5) to be set 00:32:23.463 [2024-07-15 12:22:02.395370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:23.463 [2024-07-15 12:22:02.395377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34472 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34480 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34488 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34504 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34512 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395523] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34520 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34528 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33832 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33840 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33848 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33856 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33864 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.463 [2024-07-15 12:22:02.395686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.463 [2024-07-15 12:22:02.395691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.463 [2024-07-15 12:22:02.395697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33872 len:8 PRP1 0x0 PRP2 0x0 00:32:23.463 [2024-07-15 12:22:02.395703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33880 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34536 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34544 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34552 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 
[2024-07-15 12:22:02.395817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34560 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34568 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34576 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34584 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34592 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34600 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34608 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.395978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.395986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34616 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.395992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.395999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34624 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34632 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34640 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34648 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:34656 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34664 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34672 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34680 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34688 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34696 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.396231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34704 len:8 PRP1 0x0 PRP2 0x0 
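Every completion in the run above carries the same status, printed as (00/08): status code type 0x0 (generic) with status code 0x8, "Command Aborted due to SQ Deletion". The driver is completing the still-queued WRITEs manually because their submission queue is being torn down, not because the target executed and failed them. As a rough, hedged sketch only (not taken from the test sources), a completion callback written against the public SPDK NVMe API could recognize that status roughly like this:

/*
 * Hypothetical sketch, assuming the public SPDK NVMe driver headers
 * (include/spdk/nvme.h, which pulls in nvme_spec.h); not part of this test.
 * io_complete() is the kind of callback passed to spdk_nvme_ns_cmd_write().
 */
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return;	/* normal completion */
	}

	/* "(00/08)" in the log is SCT 0x0 / SC 0x8: aborted due to SQ deletion. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The command never executed on the namespace; it can be
		 * retried once the controller has been reset or failed over. */
	}
}

Requests aborted this way never reached the namespace, so a higher layer (here, bdev_nvme) is free to requeue them after the reset that follows in the log.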
00:32:23.464 [2024-07-15 12:22:02.396255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.396261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.396266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34712 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.406457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.464 [2024-07-15 12:22:02.406471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.464 [2024-07-15 12:22:02.406479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.464 [2024-07-15 12:22:02.406487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34720 len:8 PRP1 0x0 PRP2 0x0 00:32:23.464 [2024-07-15 12:22:02.406495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34728 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34736 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34744 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34752 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34760 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34768 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34776 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34784 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34792 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34800 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34808 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34816 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34824 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33808 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.406971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33888 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.406980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.406989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.406996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33896 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:23.465 [2024-07-15 12:22:02.407021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33904 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33912 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33920 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33928 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33936 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.465 [2024-07-15 12:22:02.407193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33944 len:8 PRP1 0x0 PRP2 0x0 00:32:23.465 [2024-07-15 12:22:02.407202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.465 [2024-07-15 12:22:02.407211] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.465 [2024-07-15 12:22:02.407218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33952 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33960 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33968 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33976 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33984 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33992 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34000 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34008 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34016 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34024 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34032 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34040 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 
12:22:02.407606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34048 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34056 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34064 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34072 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34080 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34088 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407796] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34096 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34104 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34112 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34120 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34128 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.466 [2024-07-15 12:22:02.407963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34136 len:8 PRP1 0x0 PRP2 0x0 00:32:23.466 [2024-07-15 12:22:02.407971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.466 [2024-07-15 12:22:02.407980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.466 [2024-07-15 12:22:02.407987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.407994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34144 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34152 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34160 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34168 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34176 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34184 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 
12:22:02.408185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34192 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34200 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34208 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34216 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34224 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34232 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34248 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34256 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34264 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34272 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34280 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:34288 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34296 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34304 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33816 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.408669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.408678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.408684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.408691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33824 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.415139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.415152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.415160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.415167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34312 len:8 PRP1 0x0 PRP2 0x0 00:32:23.467 [2024-07-15 12:22:02.415175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.415184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.415191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.467 [2024-07-15 12:22:02.415199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34320 len:8 PRP1 0x0 PRP2 0x0 
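The aborts continue (interrupted by a couple of queued READs) until, a few entries further on, bdev_nvme reports that qpair 0x2541e00 was disconnected and freed, fails the controller over from 10.0.0.2:4421 to 10.0.0.2:4422, and completes the reset successfully. A minimal sketch of how this abort-on-reset behaviour can be triggered through the public SPDK NVMe driver API is shown below; the function and variable names are illustrative assumptions, not code from this test, and ctrlr/ns are assumed to come from a normal spdk_nvme_connect()/spdk_nvme_ctrlr_get_ns() setup:

#include "spdk/nvme.h"
#include "spdk/env.h"

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	int *outstanding = arg;

	/* Aborted requests show up here with SCT/SC 00/08, as in the log. */
	(*outstanding)--;
}

static void
abort_queued_writes(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	uint32_t sector_size = spdk_nvme_ns_get_sector_size(ns);
	void *buf = spdk_zmalloc(8 * sector_size, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	struct spdk_nvme_qpair *qpair;
	int outstanding = 0;
	uint64_t lba;

	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL || buf == NULL) {
		spdk_free(buf);
		return;
	}

	/* Queue a burst of 8-block writes (len:8, matching the log) without
	 * draining completions in between. */
	for (lba = 0; lba < 64; lba += 8) {
		if (spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, 8,
					   write_done, &outstanding, 0) == 0) {
			outstanding++;
		}
	}

	/* Resetting the controller tears down the submission queue; requests
	 * still queued on the qpair are aborted and completed manually with
	 * ABORTED - SQ DELETION, which is what produces the entries above. */
	spdk_nvme_ctrlr_reset(ctrlr);

	/* Defensive drain in case any callbacks did not fire during the reset. */
	spdk_nvme_qpair_process_completions(qpair, 0);

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_free(buf);
}

After such a reset the application has to re-establish its I/O path before new submissions succeed; in this test that is presumably what bdev_nvme does when it fails over to the alternate TRID at 10.0.0.2:4422 and then logs "Resetting controller successful."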
00:32:23.467 [2024-07-15 12:22:02.415207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.467 [2024-07-15 12:22:02.415216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.467 [2024-07-15 12:22:02.415229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34328 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34336 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34344 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34352 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34360 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34368 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34376 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34384 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34392 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34400 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34408 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34416 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34424 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34432 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34448 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34456 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:02.415762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.468 [2024-07-15 12:22:02.415769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.468 [2024-07-15 12:22:02.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34464 len:8 PRP1 0x0 PRP2 0x0 00:32:23.468 [2024-07-15 12:22:02.415785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:23.468 [2024-07-15 12:22:02.415831] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2541e00 was disconnected and freed. reset controller. 00:32:23.468 [2024-07-15 12:22:02.415842] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:23.468 [2024-07-15 12:22:02.415852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.468 [2024-07-15 12:22:02.415888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2376fd0 (9): Bad file descriptor 00:32:23.468 [2024-07-15 12:22:02.420900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.468 [2024-07-15 12:22:02.490353] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:23.468 [2024-07-15 12:22:06.786875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.786924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.786942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.786957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.786972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.786987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.786994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.787006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.787013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.787022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 
12:22:06.787029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.468 [2024-07-15 12:22:06.787037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.468 [2024-07-15 12:22:06.787044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:23.469 [2024-07-15 12:22:06.787609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:23.469 [2024-07-15 12:22:06.787631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.469 [2024-07-15 12:22:06.787675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.469 [2024-07-15 12:22:06.787682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787925] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.787991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.787998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43832 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 [2024-07-15 12:22:06.788208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.470 [2024-07-15 12:22:06.788216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.470 
[2024-07-15 12:22:06.788223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.471 [2024-07-15 12:22:06.788313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43968 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43976 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:32:23.471 [2024-07-15 12:22:06.788400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43984 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43992 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44000 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44008 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44016 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44024 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44032 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44040 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44048 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44056 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44064 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44072 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:44080 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44088 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44096 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44104 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44112 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44120 len:8 PRP1 0x0 PRP2 0x0 00:32:23.471 [2024-07-15 12:22:06.788815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44128 len:8 PRP1 0x0 PRP2 
0x0 00:32:23.471 [2024-07-15 12:22:06.788838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.471 [2024-07-15 12:22:06.788845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.471 [2024-07-15 12:22:06.788850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.471 [2024-07-15 12:22:06.788855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44136 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.788861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.788867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.788872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.788878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44144 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.788884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.788892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.788897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44152 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44160 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44168 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44176 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44184 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44192 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44200 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44208 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43560 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:23.472 [2024-07-15 12:22:06.799600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:23.472 [2024-07-15 12:22:06.799605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43568 len:8 PRP1 0x0 PRP2 0x0 00:32:23.472 [2024-07-15 12:22:06.799611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799652] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2541bf0 was disconnected and freed. reset controller. 00:32:23.472 [2024-07-15 12:22:06.799661] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:23.472 [2024-07-15 12:22:06.799681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.472 [2024-07-15 12:22:06.799689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.472 [2024-07-15 12:22:06.799703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.472 [2024-07-15 12:22:06.799716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.472 [2024-07-15 12:22:06.799730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.472 [2024-07-15 12:22:06.799737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:23.472 [2024-07-15 12:22:06.799758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2376fd0 (9): Bad file descriptor 00:32:23.472 [2024-07-15 12:22:06.802585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:23.472 [2024-07-15 12:22:06.838649] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
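The long run of ABORTED - SQ DELETION completions above is the expected signature of this test: each time a path is removed, the I/O queued on that connection is aborted, bdev_nvme logs a "Start failover from <old trid> to <new trid>" notice, reconnects on a surviving path, and finishes with "Resetting controller successful." A minimal sketch of how such a capture can be summarized, assuming this bdevperf output has been saved to the try.txt file that the trace below cats (the test itself only does the reset count and expects exactly 3):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # assumed capture of this output
  # list the path-to-path transitions that occurred
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$log" | sort | uniq -c
  # count completed failovers (the script later checks this equals 3)
  grep -c 'Resetting controller successful' "$log"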
00:32:23.472 
00:32:23.472 Latency(us)
00:32:23.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.472 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:23.472 Verification LBA range: start 0x0 length 0x4000
00:32:23.472 NVMe0n1 : 15.01 10807.66 42.22 502.43 0.00 11295.06 436.31 29861.62
00:32:23.472 ===================================================================================================================
00:32:23.472 Total : 10807.66 42.22 502.43 0.00 11295.06 436.31 29861.62
00:32:23.472 Received shutdown signal, test time was about 15.000000 seconds
00:32:23.472 
00:32:23.472 Latency(us)
00:32:23.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.472 ===================================================================================================================
00:32:23.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:23.472 12:22:12 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1308654
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1308654 /var/tmp/bdevperf.sock
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1308654 ']'
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:23.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:23.472 [2024-07-15 12:22:13.403673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:23.472 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:23.758 [2024-07-15 12:22:13.584158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:23.758 12:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:24.014 NVMe0n1
00:32:24.270 12:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:24.527 
00:32:24.527 12:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:24.784 
00:32:24.784 12:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:24.784 12:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:32:25.040 12:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:25.296 12:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:32:28.563 12:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:28.563 12:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:32:28.563 12:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:28.563 12:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1309580
00:32:28.563 12:22:18 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1309580
00:32:29.491 0
00:32:29.491 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:29.491 [2024-07-15 12:22:13.048435] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization...
00:32:29.491 [2024-07-15 12:22:13.048485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308654 ] 00:32:29.491 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.491 [2024-07-15 12:22:13.115434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.491 [2024-07-15 12:22:13.152009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.491 [2024-07-15 12:22:15.059462] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:29.491 [2024-07-15 12:22:15.059506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.491 [2024-07-15 12:22:15.059516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.491 [2024-07-15 12:22:15.059525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.491 [2024-07-15 12:22:15.059532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.491 [2024-07-15 12:22:15.059539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.491 [2024-07-15 12:22:15.059545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.492 [2024-07-15 12:22:15.059552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:29.492 [2024-07-15 12:22:15.059559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:29.492 [2024-07-15 12:22:15.059565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:29.492 [2024-07-15 12:22:15.059590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:29.492 [2024-07-15 12:22:15.059603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc52fd0 (9): Bad file descriptor 00:32:29.492 [2024-07-15 12:22:15.067040] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:29.492 Running I/O for 1 seconds... 
00:32:29.492 00:32:29.492 Latency(us) 00:32:29.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.492 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:29.492 Verification LBA range: start 0x0 length 0x4000 00:32:29.492 NVMe0n1 : 1.01 10818.42 42.26 0.00 0.00 11787.77 2535.96 11397.57 00:32:29.492 =================================================================================================================== 00:32:29.492 Total : 10818.42 42.26 0.00 0.00 11787.77 2535.96 11397.57 00:32:29.492 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:29.492 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:29.748 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:30.005 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:30.005 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:30.005 12:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:30.261 12:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1308654 ']' 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1308654' 00:32:33.531 killing process with pid 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1308654 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:33.531 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:33.787 
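Collected from the xtrace output above, the RPC traffic driving this phase boils down to exposing the same subsystem on three TCP ports, registering all three paths with the bdevperf-side NVMe bdev, and then dropping them one at a time while checking that the NVMe0 controller stays registered. A condensed sketch (addresses, ports and NQN as logged; the loop structure is illustrative, not the literal failover.sh code):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # make the target listen on two extra ports (issued against the target's default RPC socket)
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # register three paths to the same subsystem with the bdevperf NVMe bdev
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n $nqn
    done

    # drop paths one by one; NVMe0 must still be reported after each removal
    for port in 4420 4422 4421; do
        $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
            -s $port -f ipv4 -n $nqn
        $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    done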
12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:33.787 rmmod nvme_tcp 00:32:33.787 rmmod nvme_fabrics 00:32:33.787 rmmod nvme_keyring 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1305659 ']' 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1305659 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1305659 ']' 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1305659 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:33.787 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1305659 00:32:34.046 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:34.046 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:34.046 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1305659' 00:32:34.046 killing process with pid 1305659 00:32:34.046 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1305659 00:32:34.046 12:22:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1305659 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.046 12:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.578 12:22:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:36.578 00:32:36.578 real 0m37.526s 00:32:36.578 user 1m59.076s 00:32:36.578 sys 0m7.552s 00:32:36.578 12:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:36.578 12:22:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
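The tail of the run above is the shared nvmftestfini teardown. Stripped of the xtrace noise it reduces to roughly the following (module, interface and pid values are as logged in this run; the namespace removal line is only an assumption about what _remove_spdk_ns does):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nvmfpid=1305659                                      # nvmf_tgt started by nvmftestinit in this run

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    modprobe -v -r nvme-tcp                                 # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, per the log
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address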
00:32:36.578 ************************************ 00:32:36.578 END TEST nvmf_failover 00:32:36.578 ************************************ 00:32:36.578 12:22:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:36.578 12:22:26 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:36.578 12:22:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:36.578 12:22:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.578 12:22:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.578 ************************************ 00:32:36.578 START TEST nvmf_host_discovery 00:32:36.578 ************************************ 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:36.578 * Looking for test storage... 00:32:36.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.578 12:22:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:36.579 12:22:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:36.579 12:22:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.874 12:22:31 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:41.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:41.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:41.874 12:22:31 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:41.874 Found net devices under 0000:86:00.0: cvl_0_0 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:41.874 Found net devices under 0000:86:00.1: cvl_0_1 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.874 12:22:31 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.874 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:42.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:32:42.133 00:32:42.133 --- 10.0.0.2 ping statistics --- 00:32:42.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.133 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:42.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:32:42.133 00:32:42.133 --- 10.0.0.1 ping statistics --- 00:32:42.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.133 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:42.133 12:22:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1313827 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1313827 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1313827 ']' 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:42.133 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.133 [2024-07-15 12:22:32.076856] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:32:42.133 [2024-07-15 12:22:32.076898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.133 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.392 [2024-07-15 12:22:32.149386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.392 [2024-07-15 12:22:32.189502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.392 [2024-07-15 12:22:32.189542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.392 [2024-07-15 12:22:32.189549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.392 [2024-07-15 12:22:32.189555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.392 [2024-07-15 12:22:32.189560] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
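Everything from the netns creation up to the waitforlisten above is the standard phy-mode TCP bring-up: the first e810 port is moved into a private namespace and becomes the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then started inside that namespace. A condensed sketch of that plumbing, with interface names, addresses and the nvmf_tgt command taken verbatim from the log (the backgrounding and pid capture stand in for the nvmfappstart/waitforlisten helpers):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability

    # start the target application inside the namespace (core mask and flags as logged)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                                    # the helpers then wait for /var/tmp/spdk.sock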
00:32:42.392 [2024-07-15 12:22:32.189598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 [2024-07-15 12:22:32.318761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 [2024-07-15 12:22:32.330921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 null0 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 null1 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1314016 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1314016 /tmp/host.sock 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1314016 ']' 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:42.392 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:42.392 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.650 [2024-07-15 12:22:32.407055] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:32:42.650 [2024-07-15 12:22:32.407099] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314016 ] 00:32:42.650 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.650 [2024-07-15 12:22:32.474219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.650 [2024-07-15 12:22:32.515642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.650 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:42.908 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.909 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.166 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 [2024-07-15 12:22:32.944477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:32 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:32:43.167 12:22:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:43.730 [2024-07-15 12:22:33.666299] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:43.731 [2024-07-15 12:22:33.666318] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:43.731 [2024-07-15 12:22:33.666331] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:43.988 [2024-07-15 12:22:33.752600] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:43.988 [2024-07-15 12:22:33.849415] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:32:43.988 [2024-07-15 12:22:33.849433] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.245 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.527 12:22:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 [2024-07-15 12:22:34.460587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:44.527 [2024-07-15 12:22:34.460922] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:44.527 [2024-07-15 12:22:34.460943] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:44.527 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.799 12:22:34 
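Two target-side changes drive the next assertions: a second namespace (null1) was attached to the subsystem, which surfaces on the host as bdev nvme0n2, and a second listener was opened on port 4421, which the discovery service notices via an AER and a fresh discovery log page. Reduced to the essential commands (subsystem NQN, address and ports are taken from this log, not introduced here), the target-side RPCs are:

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

and the host-side check that a new path appeared lists the transport service IDs per controller:

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect "4420 4421"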
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.799 [2024-07-15 12:22:34.590345] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:44.799 12:22:34 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:44.799 12:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:32:44.799 [2024-07-15 12:22:34.651890] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:44.799 [2024-07-15 12:22:34.651907] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:44.799 [2024-07-15 12:22:34.651912] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.753 [2024-07-15 12:22:35.700935] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:45.753 [2024-07-15 12:22:35.700957] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:45.753 [2024-07-15 12:22:35.701783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.753 [2024-07-15 12:22:35.701797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.753 [2024-07-15 12:22:35.701805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.753 [2024-07-15 12:22:35.701812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.753 [2024-07-15 12:22:35.701835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.753 [2024-07-15 12:22:35.701842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.753 [2024-07-15 12:22:35.701850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:45.753 [2024-07-15 12:22:35.701857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.753 [2024-07-15 12:22:35.701863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 
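Here the first listener is torn down again. The target-side call, as issued above, is:

  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The ABORTED - SQ DELETION completions that follow appear to be the host's outstanding ASYNC EVENT REQUESTs on the 4420 admin queue being flushed as that connection (tqpair 0x662a90) is torn down; they are expected noise from the teardown, not a test failure.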
00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:45.753 [2024-07-15 12:22:35.711795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:45.753 [2024-07-15 12:22:35.721834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.753 [2024-07-15 12:22:35.722108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.753 [2024-07-15 12:22:35.722122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:45.753 [2024-07-15 12:22:35.722130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:45.753 [2024-07-15 12:22:35.722141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:45.753 [2024-07-15 12:22:35.722157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.753 [2024-07-15 12:22:35.722168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.753 [2024-07-15 12:22:35.722175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.753 [2024-07-15 12:22:35.722185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.753 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.753 [2024-07-15 12:22:35.731893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.753 [2024-07-15 12:22:35.732089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.753 [2024-07-15 12:22:35.732104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:45.753 [2024-07-15 12:22:35.732111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:45.753 [2024-07-15 12:22:35.732123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:45.753 [2024-07-15 12:22:35.732132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.753 [2024-07-15 12:22:35.732139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.753 [2024-07-15 12:22:35.732146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.753 [2024-07-15 12:22:35.732156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.753 [2024-07-15 12:22:35.741950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:45.753 [2024-07-15 12:22:35.742175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.753 [2024-07-15 12:22:35.742187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:45.753 [2024-07-15 12:22:35.742194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:45.753 [2024-07-15 12:22:35.742204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:45.753 [2024-07-15 12:22:35.742219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:45.753 [2024-07-15 12:22:35.742231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:45.753 [2024-07-15 12:22:35.742238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:45.753 [2024-07-15 12:22:35.742248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.011 [2024-07-15 12:22:35.752001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:46.011 [2024-07-15 12:22:35.752200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.011 [2024-07-15 12:22:35.752218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:46.011 [2024-07-15 12:22:35.752232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:46.011 [2024-07-15 12:22:35.752244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:46.011 [2024-07-15 12:22:35.752264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:46.011 [2024-07-15 12:22:35.752271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:46.011 [2024-07-15 12:22:35.752278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:46.011 [2024-07-15 12:22:35.752291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.011 [2024-07-15 12:22:35.762062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:46.011 [2024-07-15 12:22:35.762269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.011 [2024-07-15 12:22:35.762284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:46.011 [2024-07-15 12:22:35.762294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:46.011 [2024-07-15 12:22:35.762307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:46.011 [2024-07-15 12:22:35.762323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:46.011 [2024-07-15 12:22:35.762330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:46.011 [2024-07-15 12:22:35.762337] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:46.011 [2024-07-15 12:22:35.762346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.011 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.011 [2024-07-15 12:22:35.772118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:46.011 [2024-07-15 12:22:35.772334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.011 [2024-07-15 12:22:35.772346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:46.011 [2024-07-15 12:22:35.772353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:46.011 [2024-07-15 12:22:35.772363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:46.011 [2024-07-15 12:22:35.772373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:46.011 [2024-07-15 12:22:35.772379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:46.011 [2024-07-15 12:22:35.772386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:46.011 [2024-07-15 12:22:35.772395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.011 [2024-07-15 12:22:35.782170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:46.011 [2024-07-15 12:22:35.782427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.011 [2024-07-15 12:22:35.782440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x662a90 with addr=10.0.0.2, port=4420 00:32:46.011 [2024-07-15 12:22:35.782446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x662a90 is same with the state(5) to be set 00:32:46.012 [2024-07-15 12:22:35.782456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x662a90 (9): Bad file descriptor 00:32:46.012 [2024-07-15 12:22:35.782472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:46.012 [2024-07-15 12:22:35.782479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:46.012 [2024-07-15 12:22:35.782485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:46.012 [2024-07-15 12:22:35.782494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
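The repeated "resetting controller" / "connect() failed, errno = 111" messages are the host driver retrying the now-dead 4420 path: errno 111 is ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.2:4420 any more. The retries stop once the next discovery log page reports that path "not found" (below). One quick way to confirm the errno mapping on the test host, assuming kernel headers are installed:

  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h    # -> #define ECONNREFUSED 111 /* Connection refused */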
00:32:46.012 [2024-07-15 12:22:35.787489] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:46.012 [2024-07-15 12:22:35.787504] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:46.012 
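Stopping the discovery service detaches everything it created, so the harness now expects both the controller list and the bdev list to drain to empty strings. The sequence, reduced to the essential commands (all taken from this run):

  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | xargs   # expect empty output
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | xargs              # expect empty output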
12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.012 12:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.012 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:32:46.012 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.269 12:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.198 [2024-07-15 12:22:37.115659] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:47.198 [2024-07-15 12:22:37.115675] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:47.198 [2024-07-15 12:22:37.115686] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:47.455 [2024-07-15 12:22:37.244083] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:47.713 [2024-07-15 12:22:37.512855] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:47.713 [2024-07-15 12:22:37.512882] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:47.713 request: 00:32:47.713 { 00:32:47.713 "name": "nvme", 00:32:47.713 "trtype": "tcp", 00:32:47.713 "traddr": "10.0.0.2", 00:32:47.713 "adrfam": "ipv4", 00:32:47.713 "trsvcid": "8009", 00:32:47.713 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:47.713 "wait_for_attach": true, 00:32:47.713 "method": "bdev_nvme_start_discovery", 00:32:47.713 "req_id": 1 00:32:47.713 } 00:32:47.713 Got JSON-RPC error response 00:32:47.713 response: 00:32:47.713 { 00:32:47.713 "code": -17, 00:32:47.713 "message": "File exists" 00:32:47.713 } 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.713 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.713 request: 00:32:47.713 { 00:32:47.713 "name": "nvme_second", 00:32:47.713 "trtype": "tcp", 00:32:47.713 "traddr": "10.0.0.2", 00:32:47.713 "adrfam": "ipv4", 00:32:47.713 "trsvcid": "8009", 00:32:47.713 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:47.713 "wait_for_attach": true, 00:32:47.713 "method": "bdev_nvme_start_discovery", 00:32:47.713 "req_id": 1 00:32:47.713 } 00:32:47.713 Got JSON-RPC error response 00:32:47.713 response: 00:32:47.713 { 00:32:47.713 "code": -17, 00:32:47.713 "message": "File exists" 00:32:47.714 } 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.714 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.970 12:22:37 
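As the two rejected calls above show, bdev_nvme_start_discovery is refused with JSON-RPC error -17 ("File exists") whether the clash is on the controller name (nvme) or on the discovery address already being served (10.0.0.2:8009). The harness wraps such expected failures in its NOT helper from common/autotest_common.sh, which succeeds only if the wrapped command fails; a minimal sketch of the first case:

  # Re-using the running discovery service's name (or its discovery endpoint) must fail with -17.
  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w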
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.970 12:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.899 [2024-07-15 12:22:38.760383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.899 [2024-07-15 12:22:38.760410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a0970 with addr=10.0.0.2, port=8010 00:32:48.899 [2024-07-15 12:22:38.760424] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:48.899 [2024-07-15 12:22:38.760434] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:48.899 [2024-07-15 12:22:38.760440] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:49.830 [2024-07-15 12:22:39.762760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.830 [2024-07-15 12:22:39.762784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x661030 with addr=10.0.0.2, port=8010 00:32:49.830 [2024-07-15 12:22:39.762795] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:49.830 [2024-07-15 12:22:39.762801] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:49.830 [2024-07-15 12:22:39.762806] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:51.202 [2024-07-15 12:22:40.764975] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:51.202 request: 00:32:51.202 { 00:32:51.202 "name": "nvme_second", 00:32:51.202 "trtype": "tcp", 00:32:51.202 "traddr": "10.0.0.2", 00:32:51.202 "adrfam": "ipv4", 00:32:51.202 "trsvcid": "8010", 00:32:51.202 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:51.202 "wait_for_attach": false, 00:32:51.202 "attach_timeout_ms": 3000, 00:32:51.202 "method": "bdev_nvme_start_discovery", 00:32:51.202 "req_id": 1 00:32:51.202 } 00:32:51.202 Got JSON-RPC error response 00:32:51.202 response: 00:32:51.202 { 00:32:51.202 "code": -110, 
00:32:51.202 "message": "Connection timed out" 00:32:51.202 } 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1314016 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:51.202 rmmod nvme_tcp 00:32:51.202 rmmod nvme_fabrics 00:32:51.202 rmmod nvme_keyring 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1313827 ']' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1313827 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1313827 ']' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1313827 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1313827 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
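The final negative case above points the discovery service at a port nothing listens on (8010) and bounds the attempt with -T 3000 (attach_timeout_ms); after roughly three seconds of ECONNREFUSED retries the RPC itself fails with -110, "Connection timed out". In plain form:

  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

What follows is the normal teardown (clearing the trap, killing the host application, and nvmftestfini unloading the nvme-tcp/nvme-fabrics/nvme-keyring modules), not part of the discovery assertions.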
00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1313827' 00:32:51.202 killing process with pid 1313827 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1313827 00:32:51.202 12:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1313827 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:51.202 12:22:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.203 12:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:51.203 12:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:53.738 00:32:53.738 real 0m17.016s 00:32:53.738 user 0m20.603s 00:32:53.738 sys 0m5.609s 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:53.738 ************************************ 00:32:53.738 END TEST nvmf_host_discovery 00:32:53.738 ************************************ 00:32:53.738 12:22:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:53.738 12:22:43 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:53.738 12:22:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:53.738 12:22:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:53.738 12:22:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:53.738 ************************************ 00:32:53.738 START TEST nvmf_host_multipath_status 00:32:53.738 ************************************ 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:53.738 * Looking for test storage... 
00:32:53.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:53.738 12:22:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:53.738 12:22:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:59.028 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:59.028 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
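(annotation) Both E810 ports (0000:86:00.0 and 0000:86:00.1, device ID 0x159b) are now on the candidate list; the loop that follows resolves each PCI function to its kernel interface by globbing the net/ directory that sysfs exposes under the device, which is how cvl_0_0 and cvl_0_1 are reported below. A stand-alone version of that lookup, using the same sysfs path as the trace — the helper name and the example invocation are only illustrative:

    pci_to_netdev() {
        # $1: PCI address in full BDF form, e.g. 0000:86:00.0
        local pci=$1
        local devs=(/sys/bus/pci/devices/"$pci"/net/*)    # one directory entry per interface on this function
        [ -e "${devs[0]}" ] || { echo "no net device under $pci" >&2; return 1; }
        printf '%s\n' "${devs[@]##*/}"                    # strip the sysfs path, keep the interface name
    }

    pci_to_netdev 0000:86:00.0                            # prints cvl_0_0 on this test rig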
00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:59.028 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:59.029 Found net devices under 0000:86:00.0: cvl_0_0 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:59.029 Found net devices under 0000:86:00.1: cvl_0_1 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:59.029 12:22:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:59.029 12:22:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:59.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:32:59.287 00:32:59.287 --- 10.0.0.2 ping statistics --- 00:32:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.287 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:32:59.287 00:32:59.287 --- 10.0.0.1 ping statistics --- 00:32:59.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.287 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1318883 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1318883 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1318883 ']' 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:59.287 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.287 [2024-07-15 12:22:49.160013] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:32:59.287 [2024-07-15 12:22:49.160054] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.287 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.287 [2024-07-15 12:22:49.230217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.287 [2024-07-15 12:22:49.271420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.287 [2024-07-15 12:22:49.271459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.287 [2024-07-15 12:22:49.271467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.287 [2024-07-15 12:22:49.271473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.287 [2024-07-15 12:22:49.271478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.288 [2024-07-15 12:22:49.271522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.288 [2024-07-15 12:22:49.271524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.220 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:00.220 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:00.220 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:00.220 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:00.220 12:22:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:00.220 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.220 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1318883 00:33:00.220 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:00.220 [2024-07-15 12:22:50.166896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.220 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:00.478 Malloc0 00:33:00.478 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:00.735 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.992 12:22:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.992 [2024-07-15 12:22:50.917042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.992 12:22:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:01.249 [2024-07-15 12:22:51.109550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1319278 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1319278 /var/tmp/bdevperf.sock 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1319278 ']' 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:01.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.249 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:01.506 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.506 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:01.506 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:01.763 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:02.020 Nvme0n1 00:33:02.020 12:22:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:02.277 Nvme0n1 00:33:02.277 12:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:02.277 12:22:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:04.856 12:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:04.856 12:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:04.856 12:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:04.856 12:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:05.787 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:05.787 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.787 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.787 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:06.045 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.045 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:06.045 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.045 12:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.302 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:06.560 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.560 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:06.560 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:06.560 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.819 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.819 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:06.819 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.819 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:07.077 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.077 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:07.077 12:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:07.077 12:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:07.334 12:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:08.266 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:08.266 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:08.266 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.266 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:08.523 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.523 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:08.523 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.523 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.781 12:22:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.781 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:09.038 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.038 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:09.038 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.038 12:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:09.296 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.296 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:09.296 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.296 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:09.553 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:09.553 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:09.553 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:09.553 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:09.810 12:22:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:10.742 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:10.742 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:10.742 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.742 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.999 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.999 12:23:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:10.999 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.999 12:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:11.256 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.256 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:11.256 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:11.256 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.512 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.768 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.768 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:11.768 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.768 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:12.025 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.025 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:12.025 12:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:12.025 12:23:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:12.281 12:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.661 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:13.919 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.919 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:13.919 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.919 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.177 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.177 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:14.177 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.177 12:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.177 12:23:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.177 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:14.177 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.177 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:14.434 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.434 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:14.434 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:14.691 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:14.948 12:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:15.880 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:15.880 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:15.880 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.880 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:16.137 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.137 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:16.137 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.137 12:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:16.137 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.137 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:16.137 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.137 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:16.394 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.394 12:23:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:16.394 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.394 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.652 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:16.910 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:16.910 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:16.910 12:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:17.168 12:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:17.426 12:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:18.358 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:18.358 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:18.358 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.358 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:18.615 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:18.615 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:18.616 12:23:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.616 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:18.616 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.616 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:18.616 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.616 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.873 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.873 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.873 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.873 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.132 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.132 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:19.132 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.132 12:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.430 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:19.690 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:19.690 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:19.948 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:19.948 12:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:21.322 12:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:21.322 12:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:21.322 12:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.322 12:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:21.322 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.322 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:21.322 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.322 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.579 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.838 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.838 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.838 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.838 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:22.095 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.095 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:22.095 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.095 12:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.095 12:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.095 12:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:22.095 12:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.353 12:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:22.611 12:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:23.543 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:23.543 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:23.543 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.543 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.801 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.801 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.801 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.801 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.058 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.058 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.058 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.058 12:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.315 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.572 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.572 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.572 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.572 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.830 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.830 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:24.830 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:25.087 12:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:25.088 12:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.461 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.718 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.718 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.718 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.718 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.719 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.719 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.719 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.719 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.975 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.975 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.975 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.975 12:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.232 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.232 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:27.232 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.232 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.489 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.489 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:27.489 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.489 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:27.747 12:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:28.679 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:28.679 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:28.679 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.679 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:28.936 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.936 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:28.936 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.936 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:29.193 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.193 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:29.193 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.193 12:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.193 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.193 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.193 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:29.193 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.451 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.451 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:29.451 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:29.451 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.709 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:33:29.709 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:29.709 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.709 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1319278 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1319278 ']' 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1319278 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1319278 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1319278' 00:33:29.967 killing process with pid 1319278 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1319278 00:33:29.967 12:23:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1319278 00:33:29.967 Connection closed with partial response: 00:33:29.967 00:33:29.967 00:33:30.228 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1319278 00:33:30.228 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:30.228 [2024-07-15 12:22:51.181696] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:33:30.228 [2024-07-15 12:22:51.181747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319278 ] 00:33:30.228 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.228 [2024-07-15 12:22:51.250358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.228 [2024-07-15 12:22:51.290258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.228 Running I/O for 90 seconds... 
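
The trace above repeats three RPC patterns: flipping listener ANA state on the target with nvmf_subsystem_listener_set_ana_state, selecting the host's path policy with bdev_nvme_set_multipath_policy, and reading the per-path current/connected/accessible flags from bdev_nvme_get_io_paths over the bdevperf RPC socket, filtered with jq by trsvcid. Below is a minimal standalone sketch of that pattern, assuming the same socket path, subsystem NQN, and listener addresses/ports shown in the trace; the port_flag helper name is illustrative and is not the test's own port_status function.

#!/usr/bin/env bash
# Minimal sketch of the RPC pattern exercised above (illustrative, not the test script itself).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Read one flag (current|connected|accessible) of the io_path listening on the given port.
port_flag() {
    local port=$1 field=$2
    "$rpc" -s "$sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
}

# Let the host spread I/O across all optimized paths.
"$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

# Move the two listeners to new ANA states, then give the host a moment to pick them up.
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1

# 4420 should now carry I/O while 4421 is unreachable.
[[ $(port_flag 4420 current) == true ]]
[[ $(port_flag 4421 accessible) == false ]]
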
00:33:30.228 [2024-07-15 12:23:04.528130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.228 [2024-07-15 12:23:04.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.528528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.528535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:30.228 [2024-07-15 12:23:04.529406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.228 [2024-07-15 12:23:04.529412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:30.229 [2024-07-15 12:23:04.529490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.529981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.529987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:33:30.229 [2024-07-15 12:23:04.530147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:30.229 [2024-07-15 12:23:04.530322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.229 [2024-07-15 12:23:04.530328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:30.230 [2024-07-15 12:23:04.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.530991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.530997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.230 [2024-07-15 12:23:04.531364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:30.230 [2024-07-15 12:23:04.531381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
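Each NOTICE pair in this stretch of the log is bdev_nvme printing a failed I/O and its completion: nvme_io_qpair_print_command gives the command (opcode, submission queue id sqid, command id cid, namespace nsid, starting lba, length in blocks, and the SGL descriptor), and spdk_nvme_print_completion gives the status. The (03/02) field is status code type 0x3 (Path Related Status), status code 0x02, i.e. Asymmetric Namespace Access Inaccessible, which is what the multipath status test expects to see while it holds one path's ANA group in the inaccessible state. A rough way to summarize these completions from a saved copy of the log (the file name here is an assumption, not something this run wrote) is:

  # Count the ANA-inaccessible completions, then break them down by queue id
  # to see which I/O qpair sat on the path that was made inaccessible.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' multipath_status.log
  grep -o 'INACCESSIBLE (03/02) qid:[0-9]*' multipath_status.log | awk '{print $3}' | sort | uniq -c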
00:33:30.231 [2024-07-15 12:23:04.531597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:04.531812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:04.531820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.585443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.231 [2024-07-15 12:23:17.585450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
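The remaining completion fields repeated throughout these entries follow the NVMe completion queue entry layout: cdw0 is the command-specific result, sqhd is the submission queue head pointer at completion time, p is the phase tag, m is the "more" bit (further detail available from the Error Information log page), and dnr is Do Not Retry. dnr:0 on every entry means the controller does not forbid a retry, so the host-side multipath layer is free to re-issue these WRITEs and READs on the surviving path. To gauge how much of the verify range completed with a path error before the path came back, a saved copy of the log (file name assumed, as above) could be reduced like this:

  # Distinct LBAs among the commands bdev_nvme printed during the outage.
  grep 'nvme_io_qpair_print_command' multipath_status.log | grep -o 'lba:[0-9]*' | sort -u | wc -l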
00:33:30.231 [2024-07-15 12:23:17.586326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.231 [2024-07-15 12:23:17.586364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:30.231 [2024-07-15 12:23:17.586377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.586384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89328 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:30.232 [2024-07-15 12:23:17.588738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:30.232 [2024-07-15 12:23:17.588745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:30.232 Received shutdown signal, test time was about 27.429509 seconds 
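The summary that follows is in the report format of SPDK's bdevperf tool: for the verify job on Nvme0n1 it lists runtime in seconds, IOPS, throughput in MiB/s, failed I/O per second, timeouts per second, and average/min/max latency in microseconds. The numbers are self-consistent: 10289.43 IOPS of 4096-byte I/O is 10289.43 * 4096 / 1048576 ≈ 40.19 MiB/s, matching the MiB/s column, and the roughly 3.0 s max latency reflects I/O held up across the path outage logged above. The exact command line for this run is not shown in this part of the log; an invocation that produces a table like this would look roughly as follows (binary path and option values are assumptions):

  # A verify workload at queue depth 128 and 4 KiB I/O on core mask 0x4, matching the
  # job line in the table below; the bdev configuration would come from a JSON config.
  ./build/examples/bdevperf -m 0x4 -q 128 -o 4096 -w verify -t 30 --json bdevperf.json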
00:33:30.232 00:33:30.232 Latency(us) 00:33:30.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.232 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:30.232 Verification LBA range: start 0x0 length 0x4000 00:33:30.232 Nvme0n1 : 27.43 10289.43 40.19 0.00 0.00 12420.32 455.90 3019898.88 00:33:30.232 =================================================================================================================== 00:33:30.232 Total : 10289.43 40.19 0.00 0.00 12420.32 455.90 3019898.88 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:30.232 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:30.232 rmmod nvme_tcp 00:33:30.490 rmmod nvme_fabrics 00:33:30.490 rmmod nvme_keyring 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1318883 ']' 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1318883 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1318883 ']' 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1318883 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1318883 00:33:30.490 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:30.491 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:30.491 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1318883' 00:33:30.491 killing process with pid 1318883 00:33:30.491 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1318883 00:33:30.491 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@972 -- # wait 1318883 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:30.749 12:23:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.654 12:23:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:32.654 00:33:32.654 real 0m39.328s 00:33:32.654 user 1m45.529s 00:33:32.654 sys 0m10.899s 00:33:32.654 12:23:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:32.654 12:23:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:32.654 ************************************ 00:33:32.654 END TEST nvmf_host_multipath_status 00:33:32.654 ************************************ 00:33:32.654 12:23:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:32.654 12:23:22 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:32.654 12:23:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:32.654 12:23:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:32.654 12:23:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.654 ************************************ 00:33:32.654 START TEST nvmf_discovery_remove_ifc 00:33:32.654 ************************************ 00:33:32.654 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:32.913 * Looking for test storage... 
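The nvmf_discovery_remove_ifc test that starts here attaches a discovery controller to the target, waits for the discovered namespace to appear as a bdev in the host application, and then removes the target-side interface to verify the bdev is cleaned up again. The condensed host-side sequence, using only commands that appear in the trace further below (the controller-loss and reconnect timeout options of the real script are omitted here), is:

  # Sketch of the sequence traced below, not a replacement for discovery_remove_ifc.sh,
  # which wraps these calls in wait loops and error handling.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
      -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --wait-for-attach
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect nvme0n1
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0    # drop the target address
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down               # take the interface down
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # expect an empty list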
00:33:32.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.913 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:32.914 12:23:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:39.513 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:39.513 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.513 12:23:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:39.513 Found net devices under 0000:86:00.0: cvl_0_0 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.513 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:39.514 Found net devices under 0000:86:00.1: cvl_0_1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:33:39.514 00:33:39.514 --- 10.0.0.2 ping statistics --- 00:33:39.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.514 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:33:39.514 00:33:39.514 --- 10.0.0.1 ping statistics --- 00:33:39.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.514 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1327442 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1327442 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1327442 ']' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 [2024-07-15 12:23:28.573137] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:33:39.514 [2024-07-15 12:23:28.573185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.514 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.514 [2024-07-15 12:23:28.646888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.514 [2024-07-15 12:23:28.686874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.514 [2024-07-15 12:23:28.686911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.514 [2024-07-15 12:23:28.686918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.514 [2024-07-15 12:23:28.686924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.514 [2024-07-15 12:23:28.686929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.514 [2024-07-15 12:23:28.686946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 [2024-07-15 12:23:28.823966] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.514 [2024-07-15 12:23:28.832099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:39.514 null0 00:33:39.514 [2024-07-15 12:23:28.864102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1327615 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1327615 /tmp/host.sock 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1327615 ']' 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:39.514 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:39.514 12:23:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 [2024-07-15 12:23:28.932184] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:33:39.514 [2024-07-15 12:23:28.932229] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327615 ] 00:33:39.514 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.514 [2024-07-15 12:23:29.001088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.514 [2024-07-15 12:23:29.042633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.514 12:23:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.448 [2024-07-15 12:23:30.210694] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:40.448 [2024-07-15 12:23:30.210713] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:40.448 [2024-07-15 12:23:30.210726] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:40.448 [2024-07-15 12:23:30.337109] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:40.705 [2024-07-15 12:23:30.515284] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:40.705 [2024-07-15 12:23:30.515329] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:40.705 [2024-07-15 12:23:30.515351] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:40.705 [2024-07-15 12:23:30.515365] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:40.705 [2024-07-15 12:23:30.515386] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.705 [2024-07-15 12:23:30.519596] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2039780 was disconnected and freed. delete nvme_qpair. 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.705 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.964 12:23:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:40.964 12:23:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.895 12:23:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.827 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:43.085 12:23:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.015 12:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.945 12:23:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:44.945 12:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:46.313 [2024-07-15 12:23:35.956692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:46.313 [2024-07-15 12:23:35.956729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.313 [2024-07-15 12:23:35.956739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.313 [2024-07-15 12:23:35.956748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.313 [2024-07-15 12:23:35.956754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.313 [2024-07-15 12:23:35.956766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.313 [2024-07-15 12:23:35.956774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.313 [2024-07-15 12:23:35.956781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.313 [2024-07-15 12:23:35.956787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.313 [2024-07-15 12:23:35.956794] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:46.313 [2024-07-15 12:23:35.956800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:46.313 [2024-07-15 12:23:35.956807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000110 is same with the state(5) to be set 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.313 [2024-07-15 12:23:35.966714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2000110 (9): Bad file descriptor 00:33:46.313 [2024-07-15 12:23:35.976754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:46.313 12:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.245 [2024-07-15 12:23:36.994253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:47.245 [2024-07-15 12:23:36.994326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2000110 with addr=10.0.0.2, port=4420 00:33:47.245 [2024-07-15 12:23:36.994355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2000110 is same with the state(5) to be set 00:33:47.245 [2024-07-15 12:23:36.994402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2000110 (9): Bad file descriptor 00:33:47.245 [2024-07-15 12:23:36.994495] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:47.245 [2024-07-15 12:23:36.994534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:47.245 [2024-07-15 12:23:36.994555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:47.245 [2024-07-15 12:23:36.994576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.245 [2024-07-15 12:23:36.994614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
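The get_bdev_list / sleep 1 exchange that keeps repeating above is the test's wait loop: it polls the host app over /tmp/host.sock for its bdev names until the list matches what the current step expects. A minimal sketch of that helper pair, reconstructed from the rpc_cmd / jq / sort / xargs calls visible in the trace (the function names and the use of rpc.py as the RPC client are assumptions; the real script goes through its rpc_cmd wrapper):

    # Sketch only: the bdev wait loop exercised in the trace above.
    HOST_SOCK=/tmp/host.sock

    get_bdev_list() {
        # Ask the host app for all bdevs, keep just the names, normalize to one line.
        rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value,
        # e.g. wait_for_bdev nvme0n1 after starting discovery, or wait_for_bdev ''
        # once the interface has been taken away.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }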
00:33:47.245 [2024-07-15 12:23:36.994637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.245 12:23:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:47.245 12:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.245 12:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:47.245 12:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.177 [2024-07-15 12:23:37.997139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:48.177 [2024-07-15 12:23:37.997162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:48.177 [2024-07-15 12:23:37.997169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:48.177 [2024-07-15 12:23:37.997176] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:48.177 [2024-07-15 12:23:37.997186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:48.177 [2024-07-15 12:23:37.997204] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:48.177 [2024-07-15 12:23:37.997223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.177 [2024-07-15 12:23:37.997235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.177 [2024-07-15 12:23:37.997244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.177 [2024-07-15 12:23:37.997250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.177 [2024-07-15 12:23:37.997257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.177 [2024-07-15 12:23:37.997264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.177 [2024-07-15 12:23:37.997271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.177 [2024-07-15 12:23:37.997277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.177 [2024-07-15 12:23:37.997284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:48.177 [2024-07-15 12:23:37.997290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:48.177 [2024-07-15 12:23:37.997296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
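The abort and reset-failure notices above are the expected consequence of the knobs the test passed when it started discovery: reconnects are attempted every second, I/O fails fast after one second, and a controller is given up after two seconds without a path. For reference, that call written as a plain rpc.py invocation (rpc.py standing in for the suite's rpc_cmd wrapper; every argument is copied from the trace):

    # Start discovery against the target's discovery service on 10.0.0.2:8009.
    # The short reconnect/ctrlr-loss timeouts are what let the path teardown
    # above finish within a few seconds once the interface goes away.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach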
00:33:48.177 [2024-07-15 12:23:37.997447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fff5d0 (9): Bad file descriptor 00:33:48.177 [2024-07-15 12:23:37.998458] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:48.177 [2024-07-15 12:23:37.998468] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:48.177 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:48.434 12:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:49.366 12:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.299 [2024-07-15 12:23:40.055791] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:50.299 [2024-07-15 12:23:40.055809] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:50.299 [2024-07-15 12:23:40.055821] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:50.299 [2024-07-15 12:23:40.183207] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:50.299 [2024-07-15 12:23:40.246354] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:50.299 [2024-07-15 12:23:40.246387] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:50.299 [2024-07-15 12:23:40.246404] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:50.299 [2024-07-15 12:23:40.246417] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:50.299 [2024-07-15 12:23:40.246424] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:50.299 [2024-07-15 12:23:40.254625] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x200d7b0 was disconnected and freed. delete nvme_qpair. 
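Read end to end, the failure and recovery halves of the trace come from a simple interface flap inside the cvl_0_0_ns_spdk namespace: drop the target-side address and down the link, wait for the attached bdev to vanish, then restore the address and wait for discovery to attach a fresh controller, which surfaces as nvme1n1. A condensed sketch of that sequence, using the exact ip netns exec commands and wait targets from the log (wait_for_bdev as sketched earlier):

    # 1. Pull the target-facing interface out from under the connected host.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # 2. Reconnects fail, the ctrlr-loss timeout expires, the bdev list empties.
    wait_for_bdev ''

    # 3. Bring the interface back; the discovery poller reconnects and attaches
    #    a new controller, visible as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1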
00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.299 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1327615 ']' 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327615' 00:33:50.556 killing process with pid 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1327615 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:50.556 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:50.556 rmmod nvme_tcp 00:33:50.815 rmmod nvme_fabrics 00:33:50.815 rmmod nvme_keyring 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
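Teardown runs the suite's killprocess helper twice, first for the host app (pid 1327615) and then for the target (pid 1327442). Its shape can be read off the kill -0 / ps / kill / wait lines in the trace; a hedged reconstruction, with the helper name and the handling of sudo-wrapped processes treated as assumptions since the trace only shows the comm= comparison against sudo:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0          # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                : # the real helper treats a sudo-wrapped process specially (elided)
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }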
00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1327442 ']' 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1327442 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1327442 ']' 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1327442 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327442 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327442' 00:33:50.815 killing process with pid 1327442 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1327442 00:33:50.815 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1327442 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:51.073 12:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.978 12:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.978 00:33:52.978 real 0m20.237s 00:33:52.978 user 0m24.810s 00:33:52.978 sys 0m5.527s 00:33:52.978 12:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.978 12:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.978 ************************************ 00:33:52.978 END TEST nvmf_discovery_remove_ifc 00:33:52.978 ************************************ 00:33:52.978 12:23:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:52.978 12:23:42 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:52.978 12:23:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:52.978 12:23:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.978 12:23:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.978 ************************************ 00:33:52.978 START TEST nvmf_identify_kernel_target 00:33:52.978 ************************************ 
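The nvmf_identify_kernel_target run that starts below works the other way around: instead of an SPDK target it configures the Linux kernel NVMe-oF target through configfs, exports the local NVMe drive at 10.0.0.1:4420, and then points nvme discover and SPDK's spdk_nvme_identify at it. A condensed sketch of those configuration steps as they appear later in the trace; the xtrace only records the echo side of each redirect, so the destination attribute paths are filled in from the usual nvmet configfs layout and should be read as assumptions:

    # Sketch: export /dev/nvme0n1 through the kernel nvmet target over TCP
    # (attribute file names assumed; values and NQN taken from the trace).
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                    # as in the trace; nvmet-tcp must also be loadable

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$subsys/attr_model"           # assumed target of the SPDK-nqn echo
    echo 1            > "$subsys/attr_allow_any_host"  # assumed target of the first bare "echo 1"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Initiator-side check, with the host NQN/ID generated earlier in the trace:
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562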
00:33:52.978 12:23:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:53.237 * Looking for test storage... 00:33:53.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.237 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:53.238 12:23:43 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:53.238 12:23:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:58.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:58.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:58.563 Found net devices under 0000:86:00.0: cvl_0_0 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:58.563 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:58.564 Found net devices under 0000:86:00.1: cvl_0_1 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.564 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.822 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.822 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.822 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:58.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:33:58.823 00:33:58.823 --- 10.0.0.2 ping statistics --- 00:33:58.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.823 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:33:58.823 00:33:58.823 --- 10.0.0.1 ping statistics --- 00:33:58.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.823 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:58.823 12:23:48 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:58.823 12:23:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:02.112 Waiting for block devices as requested 00:34:02.112 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:02.112 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.112 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.370 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:02.370 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:02.370 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:02.629 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:02.629 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:02.629 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:02.629 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:02.887 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:02.887 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:02.888 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:03.146 No valid GPT data, bailing 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:03.146 12:23:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:03.146 00:34:03.146 Discovery Log Number of Records 2, Generation counter 2 00:34:03.146 =====Discovery Log Entry 0====== 00:34:03.146 trtype: tcp 00:34:03.146 adrfam: ipv4 00:34:03.146 subtype: current discovery subsystem 00:34:03.146 treq: not specified, sq flow control disable supported 00:34:03.146 portid: 1 00:34:03.146 trsvcid: 4420 00:34:03.146 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:03.146 traddr: 10.0.0.1 00:34:03.146 eflags: none 00:34:03.146 sectype: none 00:34:03.146 =====Discovery Log Entry 1====== 00:34:03.146 trtype: tcp 00:34:03.146 adrfam: ipv4 00:34:03.146 subtype: nvme subsystem 00:34:03.146 treq: not specified, sq flow control disable supported 00:34:03.146 portid: 1 00:34:03.146 trsvcid: 4420 00:34:03.146 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:03.146 traddr: 10.0.0.1 00:34:03.146 eflags: none 00:34:03.146 sectype: none 00:34:03.146 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:03.146 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:03.146 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.146 ===================================================== 00:34:03.146 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:03.146 ===================================================== 00:34:03.146 Controller Capabilities/Features 00:34:03.146 ================================ 00:34:03.146 Vendor ID: 0000 00:34:03.146 Subsystem Vendor ID: 0000 00:34:03.146 Serial Number: 1d0497358ba2c4a3c581 00:34:03.146 Model Number: Linux 00:34:03.146 Firmware Version: 6.7.0-68 00:34:03.146 Recommended Arb Burst: 0 00:34:03.146 IEEE OUI Identifier: 00 00 00 00:34:03.146 Multi-path I/O 00:34:03.146 May have multiple subsystem ports: No 00:34:03.146 May have multiple 
controllers: No 00:34:03.146 Associated with SR-IOV VF: No 00:34:03.146 Max Data Transfer Size: Unlimited 00:34:03.146 Max Number of Namespaces: 0 00:34:03.146 Max Number of I/O Queues: 1024 00:34:03.146 NVMe Specification Version (VS): 1.3 00:34:03.146 NVMe Specification Version (Identify): 1.3 00:34:03.146 Maximum Queue Entries: 1024 00:34:03.146 Contiguous Queues Required: No 00:34:03.146 Arbitration Mechanisms Supported 00:34:03.146 Weighted Round Robin: Not Supported 00:34:03.146 Vendor Specific: Not Supported 00:34:03.146 Reset Timeout: 7500 ms 00:34:03.146 Doorbell Stride: 4 bytes 00:34:03.146 NVM Subsystem Reset: Not Supported 00:34:03.146 Command Sets Supported 00:34:03.146 NVM Command Set: Supported 00:34:03.146 Boot Partition: Not Supported 00:34:03.146 Memory Page Size Minimum: 4096 bytes 00:34:03.146 Memory Page Size Maximum: 4096 bytes 00:34:03.146 Persistent Memory Region: Not Supported 00:34:03.146 Optional Asynchronous Events Supported 00:34:03.146 Namespace Attribute Notices: Not Supported 00:34:03.146 Firmware Activation Notices: Not Supported 00:34:03.146 ANA Change Notices: Not Supported 00:34:03.146 PLE Aggregate Log Change Notices: Not Supported 00:34:03.146 LBA Status Info Alert Notices: Not Supported 00:34:03.146 EGE Aggregate Log Change Notices: Not Supported 00:34:03.146 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.146 Zone Descriptor Change Notices: Not Supported 00:34:03.146 Discovery Log Change Notices: Supported 00:34:03.146 Controller Attributes 00:34:03.146 128-bit Host Identifier: Not Supported 00:34:03.146 Non-Operational Permissive Mode: Not Supported 00:34:03.146 NVM Sets: Not Supported 00:34:03.146 Read Recovery Levels: Not Supported 00:34:03.146 Endurance Groups: Not Supported 00:34:03.146 Predictable Latency Mode: Not Supported 00:34:03.146 Traffic Based Keep ALive: Not Supported 00:34:03.146 Namespace Granularity: Not Supported 00:34:03.146 SQ Associations: Not Supported 00:34:03.146 UUID List: Not Supported 00:34:03.146 Multi-Domain Subsystem: Not Supported 00:34:03.146 Fixed Capacity Management: Not Supported 00:34:03.146 Variable Capacity Management: Not Supported 00:34:03.146 Delete Endurance Group: Not Supported 00:34:03.146 Delete NVM Set: Not Supported 00:34:03.146 Extended LBA Formats Supported: Not Supported 00:34:03.146 Flexible Data Placement Supported: Not Supported 00:34:03.146 00:34:03.146 Controller Memory Buffer Support 00:34:03.146 ================================ 00:34:03.146 Supported: No 00:34:03.146 00:34:03.146 Persistent Memory Region Support 00:34:03.146 ================================ 00:34:03.146 Supported: No 00:34:03.146 00:34:03.146 Admin Command Set Attributes 00:34:03.146 ============================ 00:34:03.146 Security Send/Receive: Not Supported 00:34:03.146 Format NVM: Not Supported 00:34:03.146 Firmware Activate/Download: Not Supported 00:34:03.146 Namespace Management: Not Supported 00:34:03.146 Device Self-Test: Not Supported 00:34:03.146 Directives: Not Supported 00:34:03.146 NVMe-MI: Not Supported 00:34:03.146 Virtualization Management: Not Supported 00:34:03.146 Doorbell Buffer Config: Not Supported 00:34:03.146 Get LBA Status Capability: Not Supported 00:34:03.146 Command & Feature Lockdown Capability: Not Supported 00:34:03.146 Abort Command Limit: 1 00:34:03.146 Async Event Request Limit: 1 00:34:03.146 Number of Firmware Slots: N/A 00:34:03.146 Firmware Slot 1 Read-Only: N/A 00:34:03.146 Firmware Activation Without Reset: N/A 00:34:03.146 Multiple Update Detection Support: N/A 
00:34:03.146 Firmware Update Granularity: No Information Provided 00:34:03.146 Per-Namespace SMART Log: No 00:34:03.146 Asymmetric Namespace Access Log Page: Not Supported 00:34:03.146 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:03.146 Command Effects Log Page: Not Supported 00:34:03.146 Get Log Page Extended Data: Supported 00:34:03.146 Telemetry Log Pages: Not Supported 00:34:03.146 Persistent Event Log Pages: Not Supported 00:34:03.146 Supported Log Pages Log Page: May Support 00:34:03.146 Commands Supported & Effects Log Page: Not Supported 00:34:03.146 Feature Identifiers & Effects Log Page:May Support 00:34:03.146 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.146 Data Area 4 for Telemetry Log: Not Supported 00:34:03.146 Error Log Page Entries Supported: 1 00:34:03.146 Keep Alive: Not Supported 00:34:03.146 00:34:03.146 NVM Command Set Attributes 00:34:03.146 ========================== 00:34:03.146 Submission Queue Entry Size 00:34:03.146 Max: 1 00:34:03.146 Min: 1 00:34:03.146 Completion Queue Entry Size 00:34:03.146 Max: 1 00:34:03.146 Min: 1 00:34:03.146 Number of Namespaces: 0 00:34:03.146 Compare Command: Not Supported 00:34:03.146 Write Uncorrectable Command: Not Supported 00:34:03.146 Dataset Management Command: Not Supported 00:34:03.146 Write Zeroes Command: Not Supported 00:34:03.146 Set Features Save Field: Not Supported 00:34:03.146 Reservations: Not Supported 00:34:03.147 Timestamp: Not Supported 00:34:03.147 Copy: Not Supported 00:34:03.147 Volatile Write Cache: Not Present 00:34:03.147 Atomic Write Unit (Normal): 1 00:34:03.147 Atomic Write Unit (PFail): 1 00:34:03.147 Atomic Compare & Write Unit: 1 00:34:03.147 Fused Compare & Write: Not Supported 00:34:03.147 Scatter-Gather List 00:34:03.147 SGL Command Set: Supported 00:34:03.147 SGL Keyed: Not Supported 00:34:03.147 SGL Bit Bucket Descriptor: Not Supported 00:34:03.147 SGL Metadata Pointer: Not Supported 00:34:03.147 Oversized SGL: Not Supported 00:34:03.147 SGL Metadata Address: Not Supported 00:34:03.147 SGL Offset: Supported 00:34:03.147 Transport SGL Data Block: Not Supported 00:34:03.147 Replay Protected Memory Block: Not Supported 00:34:03.147 00:34:03.147 Firmware Slot Information 00:34:03.147 ========================= 00:34:03.147 Active slot: 0 00:34:03.147 00:34:03.147 00:34:03.147 Error Log 00:34:03.147 ========= 00:34:03.147 00:34:03.147 Active Namespaces 00:34:03.147 ================= 00:34:03.147 Discovery Log Page 00:34:03.147 ================== 00:34:03.147 Generation Counter: 2 00:34:03.147 Number of Records: 2 00:34:03.147 Record Format: 0 00:34:03.147 00:34:03.147 Discovery Log Entry 0 00:34:03.147 ---------------------- 00:34:03.147 Transport Type: 3 (TCP) 00:34:03.147 Address Family: 1 (IPv4) 00:34:03.147 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:03.147 Entry Flags: 00:34:03.147 Duplicate Returned Information: 0 00:34:03.147 Explicit Persistent Connection Support for Discovery: 0 00:34:03.147 Transport Requirements: 00:34:03.147 Secure Channel: Not Specified 00:34:03.147 Port ID: 1 (0x0001) 00:34:03.147 Controller ID: 65535 (0xffff) 00:34:03.147 Admin Max SQ Size: 32 00:34:03.147 Transport Service Identifier: 4420 00:34:03.147 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:03.147 Transport Address: 10.0.0.1 00:34:03.147 Discovery Log Entry 1 00:34:03.147 ---------------------- 00:34:03.147 Transport Type: 3 (TCP) 00:34:03.147 Address Family: 1 (IPv4) 00:34:03.147 Subsystem Type: 2 (NVM Subsystem) 00:34:03.147 Entry Flags: 
00:34:03.147 Duplicate Returned Information: 0 00:34:03.147 Explicit Persistent Connection Support for Discovery: 0 00:34:03.147 Transport Requirements: 00:34:03.147 Secure Channel: Not Specified 00:34:03.147 Port ID: 1 (0x0001) 00:34:03.147 Controller ID: 65535 (0xffff) 00:34:03.147 Admin Max SQ Size: 32 00:34:03.147 Transport Service Identifier: 4420 00:34:03.147 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:03.147 Transport Address: 10.0.0.1 00:34:03.147 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:03.147 EAL: No free 2048 kB hugepages reported on node 1 00:34:03.405 get_feature(0x01) failed 00:34:03.405 get_feature(0x02) failed 00:34:03.405 get_feature(0x04) failed 00:34:03.405 ===================================================== 00:34:03.405 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:03.405 ===================================================== 00:34:03.405 Controller Capabilities/Features 00:34:03.405 ================================ 00:34:03.405 Vendor ID: 0000 00:34:03.405 Subsystem Vendor ID: 0000 00:34:03.405 Serial Number: 3d8b1678bcb860f674f3 00:34:03.405 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.405 Firmware Version: 6.7.0-68 00:34:03.405 Recommended Arb Burst: 6 00:34:03.405 IEEE OUI Identifier: 00 00 00 00:34:03.405 Multi-path I/O 00:34:03.405 May have multiple subsystem ports: Yes 00:34:03.405 May have multiple controllers: Yes 00:34:03.405 Associated with SR-IOV VF: No 00:34:03.405 Max Data Transfer Size: Unlimited 00:34:03.405 Max Number of Namespaces: 1024 00:34:03.405 Max Number of I/O Queues: 128 00:34:03.405 NVMe Specification Version (VS): 1.3 00:34:03.405 NVMe Specification Version (Identify): 1.3 00:34:03.405 Maximum Queue Entries: 1024 00:34:03.405 Contiguous Queues Required: No 00:34:03.405 Arbitration Mechanisms Supported 00:34:03.405 Weighted Round Robin: Not Supported 00:34:03.405 Vendor Specific: Not Supported 00:34:03.405 Reset Timeout: 7500 ms 00:34:03.405 Doorbell Stride: 4 bytes 00:34:03.405 NVM Subsystem Reset: Not Supported 00:34:03.405 Command Sets Supported 00:34:03.405 NVM Command Set: Supported 00:34:03.405 Boot Partition: Not Supported 00:34:03.405 Memory Page Size Minimum: 4096 bytes 00:34:03.405 Memory Page Size Maximum: 4096 bytes 00:34:03.405 Persistent Memory Region: Not Supported 00:34:03.405 Optional Asynchronous Events Supported 00:34:03.405 Namespace Attribute Notices: Supported 00:34:03.405 Firmware Activation Notices: Not Supported 00:34:03.405 ANA Change Notices: Supported 00:34:03.405 PLE Aggregate Log Change Notices: Not Supported 00:34:03.405 LBA Status Info Alert Notices: Not Supported 00:34:03.405 EGE Aggregate Log Change Notices: Not Supported 00:34:03.405 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.405 Zone Descriptor Change Notices: Not Supported 00:34:03.405 Discovery Log Change Notices: Not Supported 00:34:03.405 Controller Attributes 00:34:03.405 128-bit Host Identifier: Supported 00:34:03.405 Non-Operational Permissive Mode: Not Supported 00:34:03.405 NVM Sets: Not Supported 00:34:03.405 Read Recovery Levels: Not Supported 00:34:03.405 Endurance Groups: Not Supported 00:34:03.405 Predictable Latency Mode: Not Supported 00:34:03.405 Traffic Based Keep ALive: Supported 00:34:03.405 Namespace Granularity: Not Supported 
00:34:03.405 SQ Associations: Not Supported 00:34:03.405 UUID List: Not Supported 00:34:03.405 Multi-Domain Subsystem: Not Supported 00:34:03.405 Fixed Capacity Management: Not Supported 00:34:03.405 Variable Capacity Management: Not Supported 00:34:03.405 Delete Endurance Group: Not Supported 00:34:03.405 Delete NVM Set: Not Supported 00:34:03.405 Extended LBA Formats Supported: Not Supported 00:34:03.405 Flexible Data Placement Supported: Not Supported 00:34:03.405 00:34:03.405 Controller Memory Buffer Support 00:34:03.405 ================================ 00:34:03.405 Supported: No 00:34:03.405 00:34:03.405 Persistent Memory Region Support 00:34:03.405 ================================ 00:34:03.405 Supported: No 00:34:03.405 00:34:03.405 Admin Command Set Attributes 00:34:03.405 ============================ 00:34:03.405 Security Send/Receive: Not Supported 00:34:03.405 Format NVM: Not Supported 00:34:03.405 Firmware Activate/Download: Not Supported 00:34:03.405 Namespace Management: Not Supported 00:34:03.405 Device Self-Test: Not Supported 00:34:03.405 Directives: Not Supported 00:34:03.405 NVMe-MI: Not Supported 00:34:03.405 Virtualization Management: Not Supported 00:34:03.405 Doorbell Buffer Config: Not Supported 00:34:03.405 Get LBA Status Capability: Not Supported 00:34:03.405 Command & Feature Lockdown Capability: Not Supported 00:34:03.405 Abort Command Limit: 4 00:34:03.405 Async Event Request Limit: 4 00:34:03.405 Number of Firmware Slots: N/A 00:34:03.405 Firmware Slot 1 Read-Only: N/A 00:34:03.405 Firmware Activation Without Reset: N/A 00:34:03.405 Multiple Update Detection Support: N/A 00:34:03.405 Firmware Update Granularity: No Information Provided 00:34:03.405 Per-Namespace SMART Log: Yes 00:34:03.405 Asymmetric Namespace Access Log Page: Supported 00:34:03.405 ANA Transition Time : 10 sec 00:34:03.405 00:34:03.405 Asymmetric Namespace Access Capabilities 00:34:03.405 ANA Optimized State : Supported 00:34:03.405 ANA Non-Optimized State : Supported 00:34:03.405 ANA Inaccessible State : Supported 00:34:03.405 ANA Persistent Loss State : Supported 00:34:03.405 ANA Change State : Supported 00:34:03.405 ANAGRPID is not changed : No 00:34:03.405 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:03.405 00:34:03.405 ANA Group Identifier Maximum : 128 00:34:03.405 Number of ANA Group Identifiers : 128 00:34:03.405 Max Number of Allowed Namespaces : 1024 00:34:03.405 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:03.405 Command Effects Log Page: Supported 00:34:03.405 Get Log Page Extended Data: Supported 00:34:03.405 Telemetry Log Pages: Not Supported 00:34:03.405 Persistent Event Log Pages: Not Supported 00:34:03.405 Supported Log Pages Log Page: May Support 00:34:03.405 Commands Supported & Effects Log Page: Not Supported 00:34:03.405 Feature Identifiers & Effects Log Page:May Support 00:34:03.405 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.405 Data Area 4 for Telemetry Log: Not Supported 00:34:03.405 Error Log Page Entries Supported: 128 00:34:03.405 Keep Alive: Supported 00:34:03.405 Keep Alive Granularity: 1000 ms 00:34:03.405 00:34:03.405 NVM Command Set Attributes 00:34:03.405 ========================== 00:34:03.405 Submission Queue Entry Size 00:34:03.405 Max: 64 00:34:03.405 Min: 64 00:34:03.405 Completion Queue Entry Size 00:34:03.405 Max: 16 00:34:03.405 Min: 16 00:34:03.405 Number of Namespaces: 1024 00:34:03.405 Compare Command: Not Supported 00:34:03.405 Write Uncorrectable Command: Not Supported 00:34:03.405 Dataset Management Command: Supported 
00:34:03.405 Write Zeroes Command: Supported 00:34:03.405 Set Features Save Field: Not Supported 00:34:03.405 Reservations: Not Supported 00:34:03.405 Timestamp: Not Supported 00:34:03.405 Copy: Not Supported 00:34:03.405 Volatile Write Cache: Present 00:34:03.405 Atomic Write Unit (Normal): 1 00:34:03.405 Atomic Write Unit (PFail): 1 00:34:03.405 Atomic Compare & Write Unit: 1 00:34:03.405 Fused Compare & Write: Not Supported 00:34:03.405 Scatter-Gather List 00:34:03.405 SGL Command Set: Supported 00:34:03.405 SGL Keyed: Not Supported 00:34:03.405 SGL Bit Bucket Descriptor: Not Supported 00:34:03.405 SGL Metadata Pointer: Not Supported 00:34:03.405 Oversized SGL: Not Supported 00:34:03.405 SGL Metadata Address: Not Supported 00:34:03.405 SGL Offset: Supported 00:34:03.405 Transport SGL Data Block: Not Supported 00:34:03.405 Replay Protected Memory Block: Not Supported 00:34:03.405 00:34:03.405 Firmware Slot Information 00:34:03.405 ========================= 00:34:03.405 Active slot: 0 00:34:03.405 00:34:03.405 Asymmetric Namespace Access 00:34:03.405 =========================== 00:34:03.405 Change Count : 0 00:34:03.405 Number of ANA Group Descriptors : 1 00:34:03.405 ANA Group Descriptor : 0 00:34:03.405 ANA Group ID : 1 00:34:03.405 Number of NSID Values : 1 00:34:03.405 Change Count : 0 00:34:03.405 ANA State : 1 00:34:03.405 Namespace Identifier : 1 00:34:03.405 00:34:03.405 Commands Supported and Effects 00:34:03.405 ============================== 00:34:03.405 Admin Commands 00:34:03.405 -------------- 00:34:03.405 Get Log Page (02h): Supported 00:34:03.405 Identify (06h): Supported 00:34:03.405 Abort (08h): Supported 00:34:03.405 Set Features (09h): Supported 00:34:03.405 Get Features (0Ah): Supported 00:34:03.405 Asynchronous Event Request (0Ch): Supported 00:34:03.405 Keep Alive (18h): Supported 00:34:03.405 I/O Commands 00:34:03.405 ------------ 00:34:03.405 Flush (00h): Supported 00:34:03.405 Write (01h): Supported LBA-Change 00:34:03.405 Read (02h): Supported 00:34:03.405 Write Zeroes (08h): Supported LBA-Change 00:34:03.405 Dataset Management (09h): Supported 00:34:03.405 00:34:03.405 Error Log 00:34:03.405 ========= 00:34:03.405 Entry: 0 00:34:03.405 Error Count: 0x3 00:34:03.405 Submission Queue Id: 0x0 00:34:03.405 Command Id: 0x5 00:34:03.405 Phase Bit: 0 00:34:03.405 Status Code: 0x2 00:34:03.405 Status Code Type: 0x0 00:34:03.405 Do Not Retry: 1 00:34:03.405 Error Location: 0x28 00:34:03.405 LBA: 0x0 00:34:03.405 Namespace: 0x0 00:34:03.405 Vendor Log Page: 0x0 00:34:03.405 ----------- 00:34:03.405 Entry: 1 00:34:03.405 Error Count: 0x2 00:34:03.405 Submission Queue Id: 0x0 00:34:03.405 Command Id: 0x5 00:34:03.405 Phase Bit: 0 00:34:03.405 Status Code: 0x2 00:34:03.405 Status Code Type: 0x0 00:34:03.405 Do Not Retry: 1 00:34:03.405 Error Location: 0x28 00:34:03.405 LBA: 0x0 00:34:03.405 Namespace: 0x0 00:34:03.405 Vendor Log Page: 0x0 00:34:03.405 ----------- 00:34:03.405 Entry: 2 00:34:03.405 Error Count: 0x1 00:34:03.405 Submission Queue Id: 0x0 00:34:03.405 Command Id: 0x4 00:34:03.405 Phase Bit: 0 00:34:03.405 Status Code: 0x2 00:34:03.405 Status Code Type: 0x0 00:34:03.405 Do Not Retry: 1 00:34:03.405 Error Location: 0x28 00:34:03.405 LBA: 0x0 00:34:03.405 Namespace: 0x0 00:34:03.405 Vendor Log Page: 0x0 00:34:03.405 00:34:03.405 Number of Queues 00:34:03.406 ================ 00:34:03.406 Number of I/O Submission Queues: 128 00:34:03.406 Number of I/O Completion Queues: 128 00:34:03.406 00:34:03.406 ZNS Specific Controller Data 00:34:03.406 
============================ 00:34:03.406 Zone Append Size Limit: 0 00:34:03.406 00:34:03.406 00:34:03.406 Active Namespaces 00:34:03.406 ================= 00:34:03.406 get_feature(0x05) failed 00:34:03.406 Namespace ID:1 00:34:03.406 Command Set Identifier: NVM (00h) 00:34:03.406 Deallocate: Supported 00:34:03.406 Deallocated/Unwritten Error: Not Supported 00:34:03.406 Deallocated Read Value: Unknown 00:34:03.406 Deallocate in Write Zeroes: Not Supported 00:34:03.406 Deallocated Guard Field: 0xFFFF 00:34:03.406 Flush: Supported 00:34:03.406 Reservation: Not Supported 00:34:03.406 Namespace Sharing Capabilities: Multiple Controllers 00:34:03.406 Size (in LBAs): 1953525168 (931GiB) 00:34:03.406 Capacity (in LBAs): 1953525168 (931GiB) 00:34:03.406 Utilization (in LBAs): 1953525168 (931GiB) 00:34:03.406 UUID: 5ff400c3-e07e-4e08-81da-c3c460ecff7b 00:34:03.406 Thin Provisioning: Not Supported 00:34:03.406 Per-NS Atomic Units: Yes 00:34:03.406 Atomic Boundary Size (Normal): 0 00:34:03.406 Atomic Boundary Size (PFail): 0 00:34:03.406 Atomic Boundary Offset: 0 00:34:03.406 NGUID/EUI64 Never Reused: No 00:34:03.406 ANA group ID: 1 00:34:03.406 Namespace Write Protected: No 00:34:03.406 Number of LBA Formats: 1 00:34:03.406 Current LBA Format: LBA Format #00 00:34:03.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:03.406 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:03.406 rmmod nvme_tcp 00:34:03.406 rmmod nvme_fabrics 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.406 12:23:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.308 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:05.308 
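The configfs sequence traced above (nvmf/common.sh@658-677) is what exposes /dev/nvme0n1 as the kernel NVMe-oF TCP target that the discovery and identify runs then talk to. A rough shell equivalent is sketched below; the NQN, device and address are taken from this run, but the attribute files the echoes write to are inferred from the standard nvmet configfs layout, since xtrace does not show redirections:

  # sketch, assuming nvmet/nvmet_tcp are loaded and /dev/nvme0n1 is unused (no GPT found above)
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported as Model Number in the identify output above
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp  > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"
  # after this, the discovery above (nvme discover -t tcp -a 10.0.0.1 -s 4420) returns the two log entries shown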
12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:05.308 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:05.308 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:05.566 12:23:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:08.102 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:08.102 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:08.102 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:08.360 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:09.297 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:09.297 00:34:09.297 real 0m16.188s 00:34:09.297 user 0m4.014s 00:34:09.297 sys 0m8.516s 00:34:09.297 12:23:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:09.297 12:23:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.297 ************************************ 00:34:09.297 END TEST nvmf_identify_kernel_target 00:34:09.297 ************************************ 00:34:09.297 12:23:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:09.297 12:23:59 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:09.297 12:23:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:09.297 12:23:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:09.297 12:23:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:09.297 ************************************ 00:34:09.297 START TEST nvmf_auth_host 00:34:09.297 ************************************ 00:34:09.297 12:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:09.556 * Looking for test storage... 00:34:09.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.556 12:23:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:09.557 12:23:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.829 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.830 
12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:14.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:14.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:14.830 Found net devices under 0000:86:00.0: 
cvl_0_0 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:14.830 Found net devices under 0000:86:00.1: cvl_0_1 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.830 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.089 12:24:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:15.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:34:15.089 00:34:15.089 --- 10.0.0.2 ping statistics --- 00:34:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.089 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:34:15.089 00:34:15.089 --- 10.0.0.1 ping statistics --- 00:34:15.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.089 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1339326 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1339326 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1339326 ']' 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
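nvmf_tcp_init, traced above, gives the SPDK target and the kernel-side initiator their own ends of the link by moving one port of the NIC into a private network namespace. The commands below are taken directly from the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator/host side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # host -> target, 0% loss in the trace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> host
  # the target itself is then launched inside the namespace, as traced at nvmf/common.sh@480:
  # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth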
00:34:15.089 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.090 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fa38afa5ec3897d87ec9cea3d9729fd4 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dPq 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fa38afa5ec3897d87ec9cea3d9729fd4 0 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fa38afa5ec3897d87ec9cea3d9729fd4 0 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fa38afa5ec3897d87ec9cea3d9729fd4 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:15.371 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dPq 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dPq 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dPq 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:15.629 
12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f7cc4e94907fd491e1bb8f774b3f6532fcbc48ee9f7892bb1e86158f1e137d78 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7qa 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f7cc4e94907fd491e1bb8f774b3f6532fcbc48ee9f7892bb1e86158f1e137d78 3 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f7cc4e94907fd491e1bb8f774b3f6532fcbc48ee9f7892bb1e86158f1e137d78 3 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f7cc4e94907fd491e1bb8f774b3f6532fcbc48ee9f7892bb1e86158f1e137d78 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7qa 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7qa 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7qa 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:15.629 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c54dbc3d5ec6c0dd3b1a04e4b6dc228c6d69f279b4c071bf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rHQ 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c54dbc3d5ec6c0dd3b1a04e4b6dc228c6d69f279b4c071bf 0 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c54dbc3d5ec6c0dd3b1a04e4b6dc228c6d69f279b4c071bf 0 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c54dbc3d5ec6c0dd3b1a04e4b6dc228c6d69f279b4c071bf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rHQ 00:34:15.630 12:24:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rHQ 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.rHQ 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4c7c430b586067a6e8d1433cd8ba5b7361bae38bcf5f03fb 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Yxf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4c7c430b586067a6e8d1433cd8ba5b7361bae38bcf5f03fb 2 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4c7c430b586067a6e8d1433cd8ba5b7361bae38bcf5f03fb 2 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4c7c430b586067a6e8d1433cd8ba5b7361bae38bcf5f03fb 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Yxf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Yxf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Yxf 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b47e6f3877a67e0f00d70b04cd834e32 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VsW 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b47e6f3877a67e0f00d70b04cd834e32 1 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b47e6f3877a67e0f00d70b04cd834e32 1 
00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b47e6f3877a67e0f00d70b04cd834e32 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VsW 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VsW 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VsW 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:15.630 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=13f9a8df558e5f2786415bcfb955f76e 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HdI 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 13f9a8df558e5f2786415bcfb955f76e 1 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 13f9a8df558e5f2786415bcfb955f76e 1 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=13f9a8df558e5f2786415bcfb955f76e 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HdI 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HdI 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.HdI 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=8773bee7572c31a4b448ca05b4f5e4821e86f99a745a1898 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Sto 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8773bee7572c31a4b448ca05b4f5e4821e86f99a745a1898 2 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8773bee7572c31a4b448ca05b4f5e4821e86f99a745a1898 2 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8773bee7572c31a4b448ca05b4f5e4821e86f99a745a1898 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Sto 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Sto 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Sto 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:15.888 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36b6bfc4844f96e3ef9f1eb7c236ede4 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kmX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36b6bfc4844f96e3ef9f1eb7c236ede4 0 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36b6bfc4844f96e3ef9f1eb7c236ede4 0 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36b6bfc4844f96e3ef9f1eb7c236ede4 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kmX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kmX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kmX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8cc295591f9adb5bd00041a31b1ffddb3373b6a7fc80126b72a5597fc583ab54 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.c8g 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8cc295591f9adb5bd00041a31b1ffddb3373b6a7fc80126b72a5597fc583ab54 3 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8cc295591f9adb5bd00041a31b1ffddb3373b6a7fc80126b72a5597fc583ab54 3 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8cc295591f9adb5bd00041a31b1ffddb3373b6a7fc80126b72a5597fc583ab54 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.c8g 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.c8g 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.c8g 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1339326 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1339326 ']' 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
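[editor's note] The gen_dhchap_key traces above draw the raw secret from /dev/urandom with xxd, wrap it into a DHHC-1 string via an inline `python -` snippet, and chmod the temp file to 0600. A minimal standalone sketch of the same idea follows; the exact DHHC-1 payload layout is not visible in the xtrace, so the base64-of-key-plus-CRC-32 trailer (little-endian) used below is an assumption based on how nvme-cli represents DH-HMAC-CHAP secrets, and the digest-id mapping is taken from the digests map printed above.

key_hex=$(xxd -p -c0 -l 16 /dev/urandom)              # 16 random bytes as 32 hex chars
secret=$(python3 -c '
import sys, base64, binascii, zlib
key = binascii.unhexlify(sys.argv[1])
digest_id = int(sys.argv[2])                          # 0=null, 1=sha256, 2=sha384, 3=sha512
blob = key + zlib.crc32(key).to_bytes(4, "little")    # assumed CRC-32 trailer, little-endian
print(f"DHHC-1:{digest_id:02d}:{base64.b64encode(blob).decode()}:")
' "$key_hex" 1)
keyfile=$(mktemp -t spdk.key-sha256.XXX)
printf '%s\n' "$secret" > "$keyfile"
chmod 0600 "$keyfile"                                 # keep the secret readable only by the owner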
00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.889 12:24:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dPq 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7qa ]] 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7qa 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.rHQ 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.147 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Yxf ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yxf 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VsW 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.HdI ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HdI 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Sto 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kmX ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kmX 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.c8g 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
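[editor's note] Once the target has come up on /var/tmp/spdk.sock, the rpc_cmd keyring_file_add_key calls above register every generated secret (and its controller counterpart, where one exists) as a named keyring entry. Outside the harness this is roughly the following sketch, using the standard scripts/rpc.py client from the spdk repo root and the key files named earlier in the trace:

./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.dPq
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7qa
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.rHQ
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yxf
./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.VsW
./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HdI
# ... and likewise for key3/ckey3 and key4 (key4 has no controller key)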
00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:16.148 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:16.406 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:16.406 12:24:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:18.939 Waiting for block devices as requested 00:34:18.939 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:18.939 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.252 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.252 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.252 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:19.252 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.252 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.517 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.517 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.517 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:19.517 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:19.775 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:19.775 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:19.775 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:20.033 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:20.033 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:20.033 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:20.597 12:24:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:20.597 No valid GPT data, bailing 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:20.854 00:34:20.854 Discovery Log Number of Records 2, Generation counter 2 00:34:20.854 =====Discovery Log Entry 0====== 00:34:20.854 trtype: tcp 00:34:20.854 adrfam: ipv4 00:34:20.854 subtype: current discovery subsystem 00:34:20.854 treq: not specified, sq flow control disable supported 00:34:20.854 portid: 1 00:34:20.854 trsvcid: 4420 00:34:20.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:20.854 traddr: 10.0.0.1 00:34:20.854 eflags: none 00:34:20.854 sectype: none 00:34:20.854 =====Discovery Log Entry 1====== 00:34:20.854 trtype: tcp 00:34:20.854 adrfam: ipv4 00:34:20.854 subtype: nvme subsystem 00:34:20.854 treq: not specified, sq flow control disable supported 00:34:20.854 portid: 1 00:34:20.854 trsvcid: 4420 00:34:20.854 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:20.854 traddr: 10.0.0.1 00:34:20.854 eflags: none 00:34:20.854 sectype: none 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 
]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.854 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.112 nvme0n1 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.112 12:24:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.112 
12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.112 12:24:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.368 nvme0n1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.368 12:24:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.368 nvme0n1 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
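[editor's note] The configure_kernel_target / nvmet_auth_init / nvmet_auth_set_key steps above build the kernel-side target through configfs: create the subsystem, namespace and TCP port, link the host NQN into allowed_hosts, and store the DH-HMAC-CHAP parameters for that host. The xtrace shows the echos but not their redirect targets, so the attribute names in the condensed sketch below are the standard nvmet configfs ones and are an assumption about what the script writes; the key strings are truncated here.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
echo 0 > "$subsys/attr_allow_any_host"              # only explicitly linked hosts may connect
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"           # digest under test in this iteration
echo ffdhe2048      > "$host/dhchap_dhgroup"        # DH group under test in this iteration
echo 'DHHC-1:00:YzU0...' > "$host/dhchap_key"       # keys[1], truncated
echo 'DHHC-1:02:NGM3...' > "$host/dhchap_ctrl_key"  # ckeys[1], truncated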
00:34:21.368 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 nvme0n1 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:21.670 12:24:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.670 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.927 nvme0n1 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:21.927 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.928 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.185 nvme0n1 00:34:22.185 12:24:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.185 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.185 12:24:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.185 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.443 nvme0n1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.443 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.700 nvme0n1 00:34:22.700 
12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.700 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.701 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.958 nvme0n1 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
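The trace above repeats the same cycle once per key index: program the secret into the kernel nvmet target with nvmet_auth_set_key, restrict the host to the digest/dhgroup pair under test with bdev_nvme_set_options, attach with the matching key pair, confirm the controller actually appeared, and detach again. A minimal sketch of one such cycle, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that the key names key${keyid}/ckey${keyid} were registered with the bdev layer earlier in the run:

```bash
#!/usr/bin/env bash
# One connect/verify/detach cycle, reconstructed from the trace above.
# Assumptions: rpc_cmd wraps scripts/rpc.py; nvmet_auth_set_key (host/auth.sh)
# writes the matching DHHC-1 secret into the kernel nvmet subsystem; the key
# names key${keyid}/ckey${keyid} were registered earlier in the test run.
rpc_cmd() { "${rootdir:-.}/scripts/rpc.py" "$@"; }

digest=sha256 dhgroup=ffdhe3072 keyid=2

nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"            # target side

# Host side: advertise only the digest/dhgroup under test, then connect with
# both a host key and a controller (bidirectional) key.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication only counts as passed if the controller is actually visible.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```

Detaching between iterations keeps each digest/dhgroup/key combination isolated, so a failure in one combination cannot mask or contaminate the next attach attempt.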
00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.958 12:24:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.215 nvme0n1 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.215 
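host/auth.sh@58 builds the controller-key argument with ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so the --dhchap-ctrlr-key flag is only emitted when a controller secret exists for that index; key index 4 is generated without one (ckey= is empty in the trace), which is why its attach call carries only --dhchap-key key4 and exercises unidirectional authentication. A small illustration of the idiom, with a hypothetical ckeys array standing in for the real generated secrets:

```bash
#!/usr/bin/env bash
# The ${var:+...} expansion used at host/auth.sh@58: expands to the flag pair
# only when a controller key exists, otherwise to nothing at all.
# The ckeys contents below are placeholders, not the real generated secrets.
declare -a ckeys=([0]='DHHC-1:03:placeholder:' [4]='')

for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> extra attach args: ${ckey[*]:-<none>}"
done
# keyid=0 -> extra attach args: --dhchap-ctrlr-key ckey0
# keyid=4 -> extra attach args: <none>
```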
12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.215 12:24:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.215 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 nvme0n1 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.472 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:23.473 12:24:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.473 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.730 nvme0n1 00:34:23.730 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.730 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.731 12:24:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.731 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.988 nvme0n1 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.988 12:24:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.988 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.245 12:24:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.245 nvme0n1 00:34:24.246 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.246 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.246 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.246 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.246 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
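Every secret in this run follows the DHHC-1:&lt;t&gt;:&lt;base64&gt;: layout used for NVMe DH-HMAC-CHAP keys: the middle field records how the secret was generated (00 for an untransformed secret, 01/02/03 for SHA-256/384/512), and the base64 payload carries the raw key material followed by a 4-byte checksum. Take that breakdown as background on the key format rather than something this log itself asserts. Decoding one of the key0 secrets from the trace shows the expected 36 bytes (32-byte secret plus checksum):

```bash
#!/usr/bin/env bash
# Decode one DH-HMAC-CHAP secret from the trace and report its raw size.
# Expected output: 36 (a 32-byte secret followed by a 4-byte checksum).
key='DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u:'

b64=${key#DHHC-1:*:}   # strip the "DHHC-1:<indicator>:" prefix (shortest match)
b64=${b64%:}           # drop the trailing colon
printf '%s' "$b64" | base64 -d | wc -c
```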
00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.503 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.761 nvme0n1 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.761 12:24:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.761 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.762 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 nvme0n1 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:25.020 12:24:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:25.020 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.021 12:24:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.587 nvme0n1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.587 
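get_main_ns_ip, traced at nvmf/common.sh@741-755 before every attach, picks the address the host should dial based on the transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the chosen variable name is dereferenced, and the result (10.0.0.1 here) is echoed. A reconstruction of that helper from the trace follows; the indirect-expansion step in the middle is inferred rather than shown explicitly:

```bash
#!/usr/bin/env bash
# get_main_ns_ip as reconstructed from the nvmf/common.sh@741-755 trace lines.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP come from the
# surrounding test environment; the ${!ip} dereference is an inference.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}                     # e.g. NVMF_INITIATOR_IP -> 10.0.0.1
    [[ -z $ip ]] && return 1
    echo "$ip"
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip                    # prints 10.0.0.1
```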
12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.587 12:24:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.587 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.845 nvme0n1 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.845 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.103 12:24:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.361 nvme0n1 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.361 
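The outer structure visible at host/auth.sh@101-104 is a plain nested sweep: every DH group in the list is tried against every configured key index, each time going through nvmet_auth_set_key and connect_authenticate as traced above. In this excerpt the digest is sha256 and the groups progress from ffdhe3072 through ffdhe8192; an additional loop over other digests is assumed but not visible here. The shape of the sweep:

```bash
#!/usr/bin/env bash
# Shape of the sweep traced at host/auth.sh@101-104. The dhgroups list and the
# five key indices match this excerpt; the keys array contents are stand-ins
# for the generated DHHC-1 secrets, and nvmet_auth_set_key /
# connect_authenticate are the test functions seen in the trace.
digest=sha256
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"    # host attach/verify/detach
    done
done
```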
12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.361 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.924 nvme0n1 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.924 12:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.181 nvme0n1 00:34:27.181 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.181 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.181 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.181 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.181 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.439 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.005 nvme0n1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.005 12:24:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.005 12:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.571 nvme0n1 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.571 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.572 12:24:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.137 nvme0n1 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.137 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.395 
12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
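[Note] The host/auth.sh@100-@104 markers that repeat through this trace correspond to a three-level sweep over digest, DH group and key index. A condensed skeleton of that loop, reconstructed only from the xtrace markers above (array contents limited to the values actually exercised in this section), looks like:

    for digest in "${digests[@]}"; do            # host/auth.sh@100 (sha256 and sha384 appear in this section)
      for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101 (ffdhe2048 .. ffdhe8192)
        for keyid in "${!keys[@]}"; do           # host/auth.sh@102 (key indexes 0..4)
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103: program hmac(<digest>), DH group and DHHC-1 key on the target
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: set host options, attach, verify controller, detach
        done
      done
    done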
00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.395 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.960 nvme0n1 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.960 
12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.960 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.961 12:24:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.526 nvme0n1 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:30.526 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.527 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.784 nvme0n1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
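[Note] The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignments seen before each attach rely on bash's ${var:+word} expansion: the --dhchap-ctrlr-key argument pair is generated only when a controller key exists for that key index, which is why the keyid=4 passes (ckey='') attach without one. A minimal stand-alone illustration of the same expansion, with hypothetical variable contents:

    ckeys=([1]="some-ctrlr-secret" [4]="")    # index 4 intentionally has no controller key
    keyid=1; opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${opt[@]}"   # prints: --dhchap-ctrlr-key ckey1
    keyid=4; opt=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${opt[@]}"   # prints nothing: expansion yields no words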
00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.784 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.042 nvme0n1 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.042 12:24:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.300 nvme0n1 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.300 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 nvme0n1 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 nvme0n1 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.558 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
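[Note] The recurring get_main_ns_ip fragments (nvmf/common.sh@741-@755) reduce to a transport-keyed lookup followed by an indirect variable expansion. A condensed sketch of that helper is shown below; the transport variable name and environment values are taken as they appear in this run (tcp, 10.0.0.1), and the real function additionally checks each value with [[ -z ... ]] as visible in the trace:

    # TEST_TRANSPORT and NVMF_INITIATOR_IP are assumed to come from the test environment
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    var=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    ip=${!var}                              # indirect expansion -> 10.0.0.1 in this run
    echo "$ip"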
00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.816 nvme0n1 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.816 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
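[Editor's note] Every (digest, dhgroup, keyid) iteration in this trace runs the same host-side RPC sequence via connect_authenticate. A condensed sketch of that sequence, using only the rpc_cmd invocations that appear verbatim above (DHHC secrets elided for brevity), is:
# Host-side flow per keyid, as traced for sha384 / ffdhe3072 / key 0
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0    # ckeyN passed only when a controller key is defined
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'  # expects "nvme0" to confirm the authenticated connect
rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next keyid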
00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.074 12:24:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.074 nvme0n1 00:34:32.074 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.074 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.074 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.074 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.074 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.332 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.333 nvme0n1 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.333 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.590 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.859 nvme0n1 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.859 nvme0n1 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.859 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.859 12:24:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.138 12:24:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.397 nvme0n1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.397 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.398 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.398 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.655 nvme0n1 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.655 12:24:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.655 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.656 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.914 nvme0n1 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:33.914 12:24:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.914 12:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.171 nvme0n1 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.171 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.428 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.428 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:34.429 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 nvme0n1 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.687 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.944 nvme0n1 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.944 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.202 12:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.460 nvme0n1 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.460 12:24:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.460 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.025 nvme0n1 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.025 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.026 12:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.284 nvme0n1 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.284 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.542 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.800 nvme0n1 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:36.800 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
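(Reference note for readers following the trace: each pass above and below exercises one digest/dhgroup/keyid combination of NVMe/TCP DH-HMAC-CHAP authentication. The target side is provisioned by nvmet_auth_set_key, and the host side then reduces to four RPC calls, all of which appear verbatim in the log. A minimal sketch of that host-side sequence, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py and that the DH-CHAP keys key1/ckey1 were registered earlier in the run:

    # limit the initiator to the digest and DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # connect with DH-HMAC-CHAP; the controller key enables bidirectional auth and is omitted for keyid 4
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verify the controller came up, then tear it down before the next combination
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding trace repeats this sequence for sha256/sha384/sha512 digests and the ffdhe2048 through ffdhe8192 groups.)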
00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.801 12:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.366 nvme0n1 00:34:37.366 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.366 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.366 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.366 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.366 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.623 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.624 12:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.189 nvme0n1 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.189 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.752 nvme0n1 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.752 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.010 12:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.598 nvme0n1 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:39.598 12:24:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.598 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.164 nvme0n1 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.164 12:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.164 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.422 nvme0n1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.422 12:24:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.422 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.679 nvme0n1 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:40.679 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.680 nvme0n1 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.680 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.936 12:24:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.936 12:24:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.936 nvme0n1 00:34:40.936 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.937 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.195 12:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.195 nvme0n1 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:41.195 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.196 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.453 nvme0n1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.453 
12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.453 12:24:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.453 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.725 nvme0n1 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.725 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
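The stretch of trace above is one pass of the test's connect_authenticate step: the host is restricted to a single digest/DH-group pair and then attached to the target with the DH-HMAC-CHAP secrets for the key index under test. Condensed from the RPC calls visible in the trace (the rpc_cmd helper, address, NQNs and key names are taken verbatim from it; everything else is an illustrative assumption), the host-side sequence is roughly:

    # Sketch of one connect_authenticate pass (sha512 / ffdhe3072 / key index 1),
    # reconstructed from the rpc_cmd calls in the trace above -- not the verbatim
    # host/auth.sh implementation.
    digest=sha512 dhgroup=ffdhe3072 keyid=1

    # Limit the SPDK host to the digest and DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the target, supplying the host key and (optional) controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"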
00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.726 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.984 nvme0n1 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.984 12:24:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.984 12:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.241 nvme0n1 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.241 
12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.241 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.242 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.499 nvme0n1 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.499 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.500 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.758 nvme0n1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.758 12:24:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.758 12:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.016 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.016 12:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.016 nvme0n1 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.016 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
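Each attach in the trace is followed by the same verification and teardown: the test lists the bdev_nvme controllers, expects exactly the nvme0 controller created above (proof that DH-HMAC-CHAP authentication completed), and detaches it before moving on to the next key index. A minimal sketch of that step, using only the RPCs and names shown in the trace:

    # Confirm the authenticated controller exists, then remove it.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # a non-zero exit here would fail the test run
    rpc_cmd bdev_nvme_detach_controller nvme0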
00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.274 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.530 nvme0n1 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.530 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.531 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.787 nvme0n1 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.787 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.044 nvme0n1 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.044 12:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.044 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.302 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.560 nvme0n1 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.560 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
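The host/auth.sh@101-@104 markers show the outer iteration driving all of the passes logged here: every DH group is exercised against every configured key index, with the target reprogrammed (nvmet_auth_set_key) before each host-side attach (connect_authenticate). A rough reconstruction of that loop follows; the dhgroups list contains only the groups seen in this part of the trace, and the keys/ckeys arrays are assumed to be filled in earlier in the script (their contents appear only as DHHC-1 blobs above):

    # Assumed shape of the loop at host/auth.sh@101-@104, inferred from the trace.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # program the target side
            connect_authenticate sha512 "$dhgroup" "$keyid"   # attach, verify, detach on the host
        done
    done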
00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.561 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.127 nvme0n1 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.127 12:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.385 nvme0n1 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.385 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.642 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.642 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:34:45.642 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.642 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.643 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.900 nvme0n1 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.900 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.901 12:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.473 nvme0n1 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.473 12:24:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmEzOGFmYTVlYzM4OTdkODdlYzljZWEzZDk3MjlmZDRNwW8u: 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: ]] 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjdjYzRlOTQ5MDdmZDQ5MWUxYmI4Zjc3NGIzZjY1MzJmY2JjNDhlZTlmNzg5MmJiMWU4NjE1OGYxZTEzN2Q3OJriSZc=: 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.473 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.474 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.085 nvme0n1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.085 12:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.649 nvme0n1 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.649 12:24:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjQ3ZTZmMzg3N2E2N2UwZjAwZDcwYjA0Y2Q4MzRlMzJuin4x: 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNmOWE4ZGY1NThlNWYyNzg2NDE1YmNmYjk1NWY3NmVqQGDg: 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.649 12:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.650 12:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.650 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.650 12:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.214 nvme0n1 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.214 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODc3M2JlZTc1NzJjMzFhNGI0NDhjYTA1YjRmNWU0ODIxZTg2Zjk5YTc0NWExODk4bGDCTQ==: 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: ]] 00:34:48.472 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzZiNmJmYzQ4NDRmOTZlM2VmOWYxZWI3YzIzNmVkZTQpeMJd: 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:48.473 12:24:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:48.473 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 nvme0n1 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjMjk1NTkxZjlhZGI1YmQwMDA0MWEzMWIxZmZkZGIzMzczYjZhN2ZjODAxMjZiNzJhNTU5N2ZjNTgzYWI1NCEFg4Y=: 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:49.039 12:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.604 nvme0n1 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzU0ZGJjM2Q1ZWM2YzBkZDNiMWEwNGU0YjZkYzIyOGM2ZDY5ZjI3OWI0YzA3MWJmN5UmPA==: 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGM3YzQzMGI1ODYwNjdhNmU4ZDE0MzNjZDhiYTViNzM2MWJhZTM4YmNmNWYwM2ZigZ0Bsw==: 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.604 
12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.604 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 request: 00:34:49.862 { 00:34:49.862 "name": "nvme0", 00:34:49.862 "trtype": "tcp", 00:34:49.862 "traddr": "10.0.0.1", 00:34:49.862 "adrfam": "ipv4", 00:34:49.862 "trsvcid": "4420", 00:34:49.862 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.862 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.862 "prchk_reftag": false, 00:34:49.862 "prchk_guard": false, 00:34:49.862 "hdgst": false, 00:34:49.862 "ddgst": false, 00:34:49.862 "method": "bdev_nvme_attach_controller", 00:34:49.862 "req_id": 1 00:34:49.862 } 00:34:49.862 Got JSON-RPC error response 00:34:49.862 response: 00:34:49.862 { 00:34:49.862 "code": -5, 00:34:49.862 "message": "Input/output error" 00:34:49.862 } 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.862 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.862 request: 00:34:49.862 { 00:34:49.862 "name": "nvme0", 00:34:49.862 "trtype": "tcp", 00:34:49.862 "traddr": "10.0.0.1", 00:34:49.862 "adrfam": "ipv4", 00:34:49.862 "trsvcid": "4420", 00:34:49.862 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.862 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.862 "prchk_reftag": false, 00:34:49.862 "prchk_guard": false, 00:34:49.862 "hdgst": false, 00:34:49.862 "ddgst": false, 00:34:49.862 "dhchap_key": "key2", 00:34:49.862 "method": "bdev_nvme_attach_controller", 00:34:49.862 "req_id": 1 00:34:49.862 } 00:34:49.863 Got JSON-RPC error response 00:34:49.863 response: 00:34:49.863 { 00:34:49.863 "code": -5, 00:34:49.863 "message": "Input/output error" 00:34:49.863 } 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:49.863 12:24:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.863 request: 00:34:49.863 { 00:34:49.863 "name": "nvme0", 00:34:49.863 "trtype": "tcp", 00:34:49.863 "traddr": "10.0.0.1", 00:34:49.863 "adrfam": "ipv4", 
00:34:49.863 "trsvcid": "4420", 00:34:49.863 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:49.863 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:49.863 "prchk_reftag": false, 00:34:49.863 "prchk_guard": false, 00:34:49.863 "hdgst": false, 00:34:49.863 "ddgst": false, 00:34:49.863 "dhchap_key": "key1", 00:34:49.863 "dhchap_ctrlr_key": "ckey2", 00:34:49.863 "method": "bdev_nvme_attach_controller", 00:34:49.863 "req_id": 1 00:34:49.863 } 00:34:49.863 Got JSON-RPC error response 00:34:49.863 response: 00:34:49.863 { 00:34:49.863 "code": -5, 00:34:49.863 "message": "Input/output error" 00:34:49.863 } 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:49.863 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.121 rmmod nvme_tcp 00:34:50.121 rmmod nvme_fabrics 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1339326 ']' 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1339326 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1339326 ']' 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1339326 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1339326 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1339326' 00:34:50.121 killing process with pid 1339326 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1339326 00:34:50.121 12:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1339326 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:50.121 12:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:52.652 12:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:55.184 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:55.184 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:56.119 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:56.119 12:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dPq /tmp/spdk.key-null.rHQ /tmp/spdk.key-sha256.VsW /tmp/spdk.key-sha384.Sto /tmp/spdk.key-sha512.c8g 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:56.119 12:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.683 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:58.683 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:58.683 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:58.941 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:58.941 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:58.941 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:58.941 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:58.941 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:58.941 00:34:58.941 real 0m49.605s 00:34:58.941 user 0m44.179s 00:34:58.941 sys 0m12.125s 00:34:58.941 12:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:58.941 12:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.941 ************************************ 00:34:58.941 END TEST nvmf_auth_host 00:34:58.941 ************************************ 00:34:58.941 12:24:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:58.941 12:24:48 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:34:58.941 12:24:48 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:58.941 12:24:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:58.941 12:24:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.941 12:24:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.941 ************************************ 00:34:58.941 START TEST nvmf_digest 00:34:58.941 ************************************ 00:34:58.941 12:24:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:59.200 * Looking for test storage... 
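The host/auth.sh cleanup above removes the kernel nvmet target that served as the remote side of the auth tests. Condensed from the trace, the teardown is roughly the sketch below; the exact attribute behind the bare "echo 0" is not visible in the wrapped output, so disabling the namespace is an assumption here.

    # drop the host from the subsystem allow list, then the host definition itself
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # disable the namespace (assumed target of the bare 'echo 0' in the trace)
    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    # unlink the subsystem from port 1, then remove namespace, port and subsystem nodes
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # finally unload the kernel target modules
    modprobe -r nvmet_tcp nvmet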
00:34:59.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.200 12:24:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:59.200 12:24:49 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.201 12:24:49 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.201 12:24:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:05.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:05.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:05.770 Found net devices under 0000:86:00.0: cvl_0_0 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:05.770 Found net devices under 0000:86:00.1: cvl_0_1 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:05.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:05.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:35:05.770 00:35:05.770 --- 10.0.0.2 ping statistics --- 00:35:05.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.770 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:05.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:05.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:35:05.770 00:35:05.770 --- 10.0.0.1 ping statistics --- 00:35:05.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.770 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.770 12:24:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 ************************************ 00:35:05.771 START TEST nvmf_digest_clean 00:35:05.771 ************************************ 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1352754 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1352754 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1352754 ']' 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.771 
12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 [2024-07-15 12:24:54.856326] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:05.771 [2024-07-15 12:24:54.856370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.771 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.771 [2024-07-15 12:24:54.928027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.771 [2024-07-15 12:24:54.968156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.771 [2024-07-15 12:24:54.968199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.771 [2024-07-15 12:24:54.968206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.771 [2024-07-15 12:24:54.968212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.771 [2024-07-15 12:24:54.968217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
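Before the digest tests run, nvmf_tcp_init splits the two ice ports across network namespaces so one machine can act as both initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), and the nvmf_tgt above is launched inside that namespace. Condensed from the trace, the plumbing amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                    # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1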
00:35:05.771 [2024-07-15 12:24:54.968254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:05.771 12:24:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 null0 00:35:05.771 [2024-07-15 12:24:55.109365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.771 [2024-07-15 12:24:55.133537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1352911 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1352911 /var/tmp/bperf.sock 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1352911 ']' 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:05.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.771 [2024-07-15 12:24:55.185410] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:05.771 [2024-07-15 12:24:55.185461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352911 ] 00:35:05.771 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.771 [2024-07-15 12:24:55.254352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.771 [2024-07-15 12:24:55.295300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.771 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.029 nvme0n1 00:35:06.029 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:06.029 12:24:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.288 Running I/O for 2 seconds... 
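Each run_bperf iteration follows the same driver sequence: start bdevperf with --wait-for-rpc, finish its framework init over the bperf control socket, attach the NVMe-oF target with the digest option under test (--ddgst here, for data digest), then launch the 2-second workload through bdevperf.py. With $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and the workload parameters left as in this first run, the sequence from the trace reduces to:

    # launch bdevperf against the bperf control socket and leave it waiting for RPCs
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    # complete framework init, then attach the target with data digest enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # kick off the timed I/O
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later runs in this test only vary -w, -o and -q (randread/randwrite at 4096/128 and 131072/16).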
00:35:08.187 00:35:08.187 Latency(us) 00:35:08.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.187 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:08.187 nvme0n1 : 2.01 25840.19 100.94 0.00 0.00 4948.90 2336.50 11511.54 00:35:08.187 =================================================================================================================== 00:35:08.187 Total : 25840.19 100.94 0.00 0.00 4948.90 2336.50 11511.54 00:35:08.187 0 00:35:08.187 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:08.187 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:08.187 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:08.187 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:08.187 | select(.opcode=="crc32c") 00:35:08.187 | "\(.module_name) \(.executed)"' 00:35:08.187 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1352911 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1352911 ']' 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1352911 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1352911 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1352911' 00:35:08.476 killing process with pid 1352911 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1352911 00:35:08.476 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.476 00:35:08.476 Latency(us) 00:35:08.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.476 =================================================================================================================== 00:35:08.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1352911 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:08.476 12:24:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:08.476 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1353469 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1353469 /var/tmp/bperf.sock 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1353469 ']' 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.734 [2024-07-15 12:24:58.519367] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:08.734 [2024-07-15 12:24:58.519416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353469 ] 00:35:08.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:08.734 Zero copy mechanism will not be used. 
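The MiB/s column in these result tables is simply IOPS times the I/O size. For the 4 KiB randread table a few lines up, a quick check of that conversion (illustrative arithmetic only):

    # 25840.19 IOPS x 4096 bytes per I/O, expressed in MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.2f MiB/s\n", 25840.19 * 4096 / 1048576 }'    # prints 100.94 MiB/s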
00:35:08.734 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.734 [2024-07-15 12:24:58.587674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.734 [2024-07-15 12:24:58.628345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:08.734 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:08.992 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:08.992 12:24:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.562 nvme0n1 00:35:09.562 12:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:09.562 12:24:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.562 Zero copy mechanism will not be used. 00:35:09.562 Running I/O for 2 seconds... 
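After each run completes, the harness queries the bdevperf app for accel statistics and checks that the crc32c digest work was executed by the expected module ('software' here, since scan_dsa is false). The check that follows each result table reduces to the sketch below (again with $SPDK standing in for the spdk checkout path):

    # pull the crc32c counters out of the accel statistics over the bperf socket
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # the check passes when this prints "software <count>" with a non-zero count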
00:35:11.463 00:35:11.463 Latency(us) 00:35:11.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.463 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:11.463 nvme0n1 : 2.00 5049.58 631.20 0.00 0.00 3165.94 968.79 5442.34 00:35:11.463 =================================================================================================================== 00:35:11.463 Total : 5049.58 631.20 0.00 0.00 3165.94 968.79 5442.34 00:35:11.463 0 00:35:11.463 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:11.463 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:11.463 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:11.463 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:11.463 | select(.opcode=="crc32c") 00:35:11.463 | "\(.module_name) \(.executed)"' 00:35:11.463 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1353469 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1353469 ']' 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1353469 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1353469 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1353469' 00:35:11.721 killing process with pid 1353469 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1353469 00:35:11.721 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.721 00:35:11.721 Latency(us) 00:35:11.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.721 =================================================================================================================== 00:35:11.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.721 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1353469 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:11.979 12:25:01 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1353940 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1353940 /var/tmp/bperf.sock 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1353940 ']' 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:11.979 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:11.979 [2024-07-15 12:25:01.843463] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:35:11.979 [2024-07-15 12:25:01.843508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353940 ] 00:35:11.979 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.979 [2024-07-15 12:25:01.912540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.979 [2024-07-15 12:25:01.953076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.237 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:12.237 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:12.237 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.237 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.237 12:25:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:12.237 12:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.237 12:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.804 nvme0n1 00:35:12.804 12:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:12.804 12:25:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.804 Running I/O for 2 seconds... 
00:35:14.701 00:35:14.701 Latency(us) 00:35:14.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.701 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.701 nvme0n1 : 2.00 28436.46 111.08 0.00 0.00 4495.41 2407.74 11568.53 00:35:14.701 =================================================================================================================== 00:35:14.701 Total : 28436.46 111.08 0.00 0.00 4495.41 2407.74 11568.53 00:35:14.701 0 00:35:14.701 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:14.701 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:14.701 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:14.701 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:14.701 | select(.opcode=="crc32c") 00:35:14.701 | "\(.module_name) \(.executed)"' 00:35:14.701 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1353940 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1353940 ']' 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1353940 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1353940 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:14.960 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1353940' 00:35:14.961 killing process with pid 1353940 00:35:14.961 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1353940 00:35:14.961 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.961 00:35:14.961 Latency(us) 00:35:14.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.961 =================================================================================================================== 00:35:14.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.961 12:25:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1353940 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:15.220 12:25:05 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1354443 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1354443 /var/tmp/bperf.sock 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1354443 ']' 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:15.220 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:15.220 [2024-07-15 12:25:05.140692] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:15.220 [2024-07-15 12:25:05.140741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354443 ] 00:35:15.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.220 Zero copy mechanism will not be used. 
00:35:15.220 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.220 [2024-07-15 12:25:05.207877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.478 [2024-07-15 12:25:05.246849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.478 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:15.478 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:15.478 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:15.478 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:15.478 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:15.736 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.736 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.995 nvme0n1 00:35:15.995 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:15.995 12:25:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.995 Zero copy mechanism will not be used. 00:35:15.995 Running I/O for 2 seconds... 
00:35:18.524 00:35:18.524 Latency(us) 00:35:18.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.524 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:18.524 nvme0n1 : 2.00 6291.48 786.43 0.00 0.00 2538.95 1716.76 12024.43 00:35:18.524 =================================================================================================================== 00:35:18.524 Total : 6291.48 786.43 0.00 0.00 2538.95 1716.76 12024.43 00:35:18.524 0 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:18.524 | select(.opcode=="crc32c") 00:35:18.524 | "\(.module_name) \(.executed)"' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1354443 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1354443 ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1354443 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1354443 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1354443' 00:35:18.524 killing process with pid 1354443 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1354443 00:35:18.524 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.524 00:35:18.524 Latency(us) 00:35:18.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.524 =================================================================================================================== 00:35:18.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1354443 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1352754 00:35:18.524 12:25:08 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1352754 ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1352754 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1352754 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1352754' 00:35:18.524 killing process with pid 1352754 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1352754 00:35:18.524 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1352754 00:35:18.782 00:35:18.782 real 0m13.846s 00:35:18.782 user 0m26.369s 00:35:18.782 sys 0m4.329s 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:18.782 ************************************ 00:35:18.782 END TEST nvmf_digest_clean 00:35:18.782 ************************************ 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:18.782 ************************************ 00:35:18.782 START TEST nvmf_digest_error 00:35:18.782 ************************************ 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.782 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1355127 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1355127 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1355127 ']' 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:18.783 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.783 [2024-07-15 12:25:08.770324] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:18.783 [2024-07-15 12:25:08.770363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.041 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.041 [2024-07-15 12:25:08.840316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.041 [2024-07-15 12:25:08.880114] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.041 [2024-07-15 12:25:08.880152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.041 [2024-07-15 12:25:08.880158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.041 [2024-07-15 12:25:08.880165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.041 [2024-07-15 12:25:08.880170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
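The error-injection cases begin with a fresh nvmf_tgt started inside the test network namespace, as logged above; a brief sketch of that launch using the same flags and socket path (cvl_0_0_ns_spdk, the paths, and the readiness loop are specific to this run or assumed, as before):

    # Sketch of the target-side start seen above; --wait-for-rpc defers subsystem init
    # so accel settings can still be changed over /var/tmp/spdk.sock before framework_start_init.
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    NVMF_PID=$!
    # Assumed readiness check standing in for the harness's waitforlisten on /var/tmp/spdk.sock.
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done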
00:35:19.041 [2024-07-15 12:25:08.880187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.041 [2024-07-15 12:25:08.940601] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.041 12:25:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.041 null0 00:35:19.041 [2024-07-15 12:25:09.024367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.299 [2024-07-15 12:25:09.048538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1355148 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1355148 /var/tmp/bperf.sock 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1355148 ']' 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.299 [2024-07-15 12:25:09.100074] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:19.299 [2024-07-15 12:25:09.100114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355148 ] 00:35:19.299 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.299 [2024-07-15 12:25:09.168416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.299 [2024-07-15 12:25:09.209142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.299 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.557 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:19.558 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.558 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.558 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.558 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.558 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.815 nvme0n1 00:35:19.815 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:19.815 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.815 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.074 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.074 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:20.074 12:25:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.074 Running I/O for 2 seconds... 00:35:20.074 [2024-07-15 12:25:09.920437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.920469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.920480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.931469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.931494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.931503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.940079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.940100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.940109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.949351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.949374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.949383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.960306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.960328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.960337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.968870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.968892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.968900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.979052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.979073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20097 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.979081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.987246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.987267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.987275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:09.996831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:09.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:09.996861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.009081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.009141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.009153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.020952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.020975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.020985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.029010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.029032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.029041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.040459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.040481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.040494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.052631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.052653] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.052662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.074 [2024-07-15 12:25:10.064118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.074 [2024-07-15 12:25:10.064142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.074 [2024-07-15 12:25:10.064152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.075974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.075997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.076007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.084719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.084741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.084750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.097353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.097375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.097383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.105663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.105683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.105692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.117689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.117710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.117719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.129862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 
12:25:10.129883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.129891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.139086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.139108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.139116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.151560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.151581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.151589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.159937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.159958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.159966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.332 [2024-07-15 12:25:10.171991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.332 [2024-07-15 12:25:10.172012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.332 [2024-07-15 12:25:10.172020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.180378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.180399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.180407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.192143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.192168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.192178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.204388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.204409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.204417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.212823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.212852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.224849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.224882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.233572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.233593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.233602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.245421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.245444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.245453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.256663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.256683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.256691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.265417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.265438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.265446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.276910] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.276930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.276938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.286305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.286326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.286334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.294925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.294946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.294954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.305568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.305589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.305597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.316496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.316520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.316528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.333 [2024-07-15 12:25:10.325518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.333 [2024-07-15 12:25:10.325539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.333 [2024-07-15 12:25:10.325547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.336361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.336384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.336392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.348890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.348912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.348920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.360697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.360719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.360728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.369331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.369352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.369361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.379787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.379809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.379817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.389759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.389781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.389790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.399320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.399341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.399350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.408992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.409013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.417895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.417916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.417925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.427967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.427987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.427995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.437851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.437872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.437880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.447027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.447049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.447057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.456665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.456686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.456694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.467663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.467684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.467692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.476333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.476354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.476363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.486867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.486889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.486901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.495448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.495469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.495477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.504918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.504938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.592 [2024-07-15 12:25:10.504946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.592 [2024-07-15 12:25:10.515162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.592 [2024-07-15 12:25:10.515184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.515192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.526133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.526155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.526163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.534472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.534492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.534500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.546470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.546490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 
[2024-07-15 12:25:10.546498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.556182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.556202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.556210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.564657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.564676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.564685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.574978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.575002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.575011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.593 [2024-07-15 12:25:10.584470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.593 [2024-07-15 12:25:10.584493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.593 [2024-07-15 12:25:10.584501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.593736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.593759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.593768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.603685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.603707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.603715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.612278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.612300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2754 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.612308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.623312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.623332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.623341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.631458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.631478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.631486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.641486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.641507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.641516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.651057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.651077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.651089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.661461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.661482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.661491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.670092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.670112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.670120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.681625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.681645] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.681653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.693288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.693309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.693317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.702954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.851 [2024-07-15 12:25:10.702973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.851 [2024-07-15 12:25:10.702981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.851 [2024-07-15 12:25:10.712501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.712530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.720890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.720911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.720920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.733265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.733286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.733294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.741940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.741964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.741972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.753614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 
12:25:10.753636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.763108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.763129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.763137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.771164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.771184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.771193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.781798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.781819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.781827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.791467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.791488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.791497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.800960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.800980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.800988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.810345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.810368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.810376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.821028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.821049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.821058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.829397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.829418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.829427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.839158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.839180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.839188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.852 [2024-07-15 12:25:10.848653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:20.852 [2024-07-15 12:25:10.848679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.852 [2024-07-15 12:25:10.848688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.857744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.857769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.857777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.866451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.866475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.866483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.877291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.877313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.877321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.888156] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.888187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.896040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.896060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.896068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.907519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.907540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.907551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.916031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.916052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.916060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.926657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.926686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.936646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.936667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.936674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.945135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.945156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.945164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:21.110 [2024-07-15 12:25:10.955494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.955515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.955523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.964883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.964903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.964911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.973672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.973693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.973701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.983419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.983440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.983448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:10.994853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:10.994879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:10.994887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:11.006134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:11.006155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:11.006163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:11.014957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:11.014978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:11.014986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:11.024200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.110 [2024-07-15 12:25:11.024221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.110 [2024-07-15 12:25:11.024236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.110 [2024-07-15 12:25:11.033213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.033238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.033247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.042410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.042431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.042439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.052203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.052231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.052241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.060904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.060926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.060934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.070779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.070801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.070810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.081936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.081958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.081966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.090295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.090316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.090323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.111 [2024-07-15 12:25:11.099605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.111 [2024-07-15 12:25:11.099626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.111 [2024-07-15 12:25:11.099635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.109338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.109362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.109371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.118893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.118916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.118924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.128688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.128709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.128717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.137149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.137170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.137178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.147393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.147414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.368 [2024-07-15 12:25:11.147422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.156101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.156127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.156135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.166201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.166222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.368 [2024-07-15 12:25:11.166236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.368 [2024-07-15 12:25:11.175120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.368 [2024-07-15 12:25:11.175140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.175149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.183509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.183530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.183538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.194233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.194255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.194264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.204554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.204575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.204586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.213152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.213173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5491 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.213182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.224192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.224213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.224222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.232488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.232509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.232517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.244686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.244708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.244717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.255674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.255696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.255704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.264254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.264275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.264283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.274409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.274428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.274437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.283747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.283767] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.283775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.293195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.293215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.293223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.302587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.302608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.302616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.311991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.312011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.312019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.320363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.320383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.320395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.329929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.329950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.329958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.339084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.339104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.339112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.348969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 
12:25:11.348989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.348997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.369 [2024-07-15 12:25:11.358150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.369 [2024-07-15 12:25:11.358171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.369 [2024-07-15 12:25:11.358179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.367937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.367960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.367969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.376567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.376588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.376597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.386617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.386637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.386645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.394821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.394842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.394850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.405177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.405202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.405210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.414443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.414464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.414472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.423572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.423593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.423601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.432330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.432350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.432358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.442282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.442302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.442311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.451950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.451972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.451980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.462538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.462558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.462567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.472027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.472048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.472056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.480243] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.480264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.480273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.489455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.489477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.500988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.501009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.627 [2024-07-15 12:25:11.501017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.627 [2024-07-15 12:25:11.509369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.627 [2024-07-15 12:25:11.509391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.509399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.522208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.522235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.522244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.532844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.532872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.541580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.541601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.541609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.552416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.552437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.552445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.561437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.561457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.572449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.572469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.572481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.584327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.584348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.584356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.592756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.592776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.592785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.604935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.604964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.613361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.613382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.628 [2024-07-15 12:25:11.625289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.628 [2024-07-15 12:25:11.625312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.628 [2024-07-15 12:25:11.625320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.637316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.637338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.645681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.645702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.645711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.658051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.658073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.667806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.667826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.667834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.676631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.676651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.676660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.688522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.688542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.688550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.697177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.697197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.697205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.709563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.886 [2024-07-15 12:25:11.709584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.886 [2024-07-15 12:25:11.709592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.886 [2024-07-15 12:25:11.723393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.723414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.723422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.735392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.735413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.735421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.743873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.743893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.743901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.756305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.756326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.756337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.764746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.764766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.887 [2024-07-15 12:25:11.764775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.776787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.776808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.776816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.788306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.788328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.788336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.796299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.796320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.796328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.807900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.807921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.807929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.819776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.819795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.819804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.828173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.828194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.828202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.837757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.837777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 
lba:16473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.837786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.847277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.847301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.847310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.858735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.858756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.858764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.866686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.866706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.866715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.887 [2024-07-15 12:25:11.878751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:21.887 [2024-07-15 12:25:11.878772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.887 [2024-07-15 12:25:11.878779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.143 [2024-07-15 12:25:11.889769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:22.144 [2024-07-15 12:25:11.889792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.144 [2024-07-15 12:25:11.889801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.144 [2024-07-15 12:25:11.898536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:22.144 [2024-07-15 12:25:11.898559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.144 [2024-07-15 12:25:11.898567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.144 [2024-07-15 12:25:11.910203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14a99a0) 00:35:22.144 [2024-07-15 12:25:11.910228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:22.144 [2024-07-15 12:25:11.910236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:22.144
00:35:22.144 Latency(us)
00:35:22.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:22.144 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:22.144 nvme0n1 : 2.01 25514.30 99.67 0.00 0.00 5012.19 2535.96 16298.52
00:35:22.144 ===================================================================================================================
00:35:22.144 Total : 25514.30 99.67 0.00 0.00 5012.19 2535.96 16298.52
00:35:22.144 0
00:35:22.144 12:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:22.144 12:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:22.144 12:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:22.144 | .driver_specific
00:35:22.144 | .nvme_error
00:35:22.144 | .status_code
00:35:22.144 | .command_transient_transport_error'
00:35:22.144 12:25:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1355148
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1355148 ']'
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1355148
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:22.144 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1355148
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1355148'
00:35:22.401 killing process with pid 1355148
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1355148
00:35:22.401 Received shutdown signal, test time was about 2.000000 seconds
00:35:22.401
00:35:22.401 Latency(us)
00:35:22.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:22.401 ===================================================================================================================
00:35:22.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1355148
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs
qd 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1355659 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1355659 /var/tmp/bperf.sock 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1355659 ']' 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.401 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:22.401 [2024-07-15 12:25:12.388231] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:22.401 [2024-07-15 12:25:12.388279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355659 ] 00:35:22.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.401 Zero copy mechanism will not be used. 
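The bdevperf command traced above is SPDK's generic I/O generator started in RPC-driven mode for this second error pass: -m 2 pins it to core 1 (core mask 0x2), -r names the UNIX-domain RPC socket /var/tmp/bperf.sock, -w randread / -o 131072 / -q 16 / -t 2 describe the 128 KiB random-read workload at queue depth 16 for two seconds, and -z keeps it idle until a perform_tests RPC arrives. A minimal sketch of driving the same binary by hand, assuming the NVMe-oF TCP target from earlier in this run is still listening on 10.0.0.2:4420 (paths and the socket name are simply the ones used in this workspace):

  # start the generator and let it wait for RPC configuration
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # attach the remote namespace with TCP data digest enabled, then kick off the run
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests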
00:35:22.658 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.659 [2024-07-15 12:25:12.456787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.659 [2024-07-15 12:25:12.494369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.659 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:22.659 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:22.659 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.659 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.915 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:22.916 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.916 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.916 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.916 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.916 12:25:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:23.173 nvme0n1 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:23.173 12:25:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.173 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:23.173 Zero copy mechanism will not be used. 00:35:23.173 Running I/O for 2 seconds... 
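With the controller attached using --ddgst, the host computes and checks a CRC-32C data digest on every NVMe/TCP data PDU, and the rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 call above arms SPDK's accel_error injector to corrupt those CRC-32C results. Each corrupted digest then surfaces below as a data digest error on the qpair, and the affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). After the two-second run the script counts those completions the same way as in the previous pass; a rough sketch of that check, reusing the RPC and jq filter already traced in this log (only the variable name errcount is invented here):

  # read back the per-bdev NVMe error statistics kept because of --nvme-error-stat
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest_error pass criterion is simply a non-zero count, as in the earlier (( 200 > 0 )) check
  (( errcount > 0 ))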
00:35:23.173 [2024-07-15 12:25:13.163429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.173 [2024-07-15 12:25:13.163462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.173 [2024-07-15 12:25:13.163473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.173 [2024-07-15 12:25:13.170726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.173 [2024-07-15 12:25:13.170753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.173 [2024-07-15 12:25:13.170762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.178475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.178498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.178511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.185855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.185877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.185886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.193139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.193160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.193168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.199928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.199950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.199958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.206630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.206652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.206660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.213114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.213136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.213145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.220049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.220071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.220079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.226987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.227009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.227017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.233766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.233788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.233797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.240819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.240846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.240854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.247566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.247590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.247599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.254055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.254079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.254089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.260804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.260827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.260836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.268169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.268191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.268200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.274547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.274569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.274578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.280874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.280896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.280905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.287490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.287513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.436 [2024-07-15 12:25:13.287521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.436 [2024-07-15 12:25:13.292589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.436 [2024-07-15 12:25:13.292611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.292619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.298829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.298851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.298859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.305442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.305464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.305473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.312046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.312069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.312077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.319376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.319398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.319406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.326135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.326157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.326165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.333010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.333033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.333041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.339864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.339886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.339894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.346430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.346452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:23.437 [2024-07-15 12:25:13.346460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.352844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.352866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.352878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.359582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.359603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.359611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.366073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.366095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.366103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.373008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.373030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.373038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.379717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.379739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.379747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.386355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.386376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.386384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.393714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.393736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.393744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.400163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.400185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.400193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.406552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.406574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.406582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.412079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.412105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.412114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.418026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.418047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.418056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.423961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.423983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.423991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.437 [2024-07-15 12:25:13.429463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.437 [2024-07-15 12:25:13.429487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.437 [2024-07-15 12:25:13.429495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.435235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.435259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.441096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.441119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.441128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.446587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.446609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.446618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.452174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.452195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.452204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.457909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.457930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.457939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.463602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.463625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.463633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.469248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.469269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.469277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.474903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.474925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.474933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.480656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.480678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.480686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.486393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.486414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.486422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.492129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.492151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.497862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.497884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.497892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.503573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.503595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.503604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.509277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.509310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.514982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 
[2024-07-15 12:25:13.515004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.515012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.520636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.520659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.520667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.526286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.526308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.526316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.531954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.531975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.531984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.537367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.537388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.537396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.542764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.542785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.542793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.548193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.548213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.548222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.553786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.553808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.553816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.559487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.559511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.559519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.565770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.565792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.565801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.573129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.573152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.573160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.580107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.580130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.580138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.587273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.587294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.587303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.593988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.594009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.594018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.601412] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.601434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.601443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.609160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.609182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.609191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.615474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.615497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.615506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.621898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.621920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.621929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.629069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.629091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.629100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.635674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.635696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.635705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.639441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.639462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.639471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 
[2024-07-15 12:25:13.645338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.645360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.645368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.650688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.650709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.650717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.656026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.656048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.656056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.661211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.661240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.661249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.666365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.666386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.666398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.671626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.671649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.697 [2024-07-15 12:25:13.671658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.697 [2024-07-15 12:25:13.677074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.697 [2024-07-15 12:25:13.677097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.698 [2024-07-15 12:25:13.677105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:35:23.698 [2024-07-15 12:25:13.682475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.698 [2024-07-15 12:25:13.682496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.698 [2024-07-15 12:25:13.682504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.698 [2024-07-15 12:25:13.687900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:23.698 [2024-07-15 12:25:13.687922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.698 [2024-07-15 12:25:13.687930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.693536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.693570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.699106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.699132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.699144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.704672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.704699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.704711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.710315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.710341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.710354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.715929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.715953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.715961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.721500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.721525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.721534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.727091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.727114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.727122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.732669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.732692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.732700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.738293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.738315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.738323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.743998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.744021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.744029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.749460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.749483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.749493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.754931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.754954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.019 [2024-07-15 12:25:13.754962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.019 [2024-07-15 12:25:13.760641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.019 [2024-07-15 12:25:13.760664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.760672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.766263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.766285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.766293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.771973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.771995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.772003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.777592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.777614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.777622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.783250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.783271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.783279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.788948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.788970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.788978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.794733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.794755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 
[2024-07-15 12:25:13.794763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.800238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.800267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.805843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.805865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.805873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.811541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.811562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.811573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.817276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.817298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.817307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.822994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.823016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.823024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.828677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.828699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.828706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.834370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.834391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.834399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.840103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.840134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.845863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.845885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.845893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.851472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.851495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.851502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.857057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.857079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.857087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.862802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.862824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.862832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.868486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.868509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.868517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.874044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.874065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.874073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.879768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.879790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.879799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.885549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.885570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.885578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.891300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.020 [2024-07-15 12:25:13.891322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.020 [2024-07-15 12:25:13.891329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.020 [2024-07-15 12:25:13.897029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.897050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.897058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.902683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.902704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.902713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.908361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.908383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.908394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.913713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.913736] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.913744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.919163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.919185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.919194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.924510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.924532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.924540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.929881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.929910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.935288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.935310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.935319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.940746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.940767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.940775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.946328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.946350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.946357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.951858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.951880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.951888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.957655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.957681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.962876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.962898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.962906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.968339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.968361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.968369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.973685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.973707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.973715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.978961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.978983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.978992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.984382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.984404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.984413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.989826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 
[2024-07-15 12:25:13.989849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.989857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.021 [2024-07-15 12:25:13.995304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.021 [2024-07-15 12:25:13.995326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.021 [2024-07-15 12:25:13.995334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.022 [2024-07-15 12:25:14.000887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.022 [2024-07-15 12:25:14.000910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.022 [2024-07-15 12:25:14.000919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.022 [2024-07-15 12:25:14.006423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.022 [2024-07-15 12:25:14.006446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.022 [2024-07-15 12:25:14.006454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.022 [2024-07-15 12:25:14.012200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.022 [2024-07-15 12:25:14.012222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.022 [2024-07-15 12:25:14.012237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.017996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.018020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.018029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.023541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.023563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.023571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.029284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.029305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.029314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.035077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.035099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.035108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.040778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.040800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.040808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.046587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.046609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.046617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.052356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.052378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.052389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.058098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.058119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.058128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.063813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.063836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.063845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.069611] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.069633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.069641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.075338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.075360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.075368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.081015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.081037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.081045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.086826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.086849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.086857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.092448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.092469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.092478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.095581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.095603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.095612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.102369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.102395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.102404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:24.282 [2024-07-15 12:25:14.109941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.109964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.109972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.116905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.116935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.123296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.123318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.123326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.128900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.128922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.128930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.134892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.134914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.134922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.140845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.140868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.140876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.146968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.146989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.146997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.152683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.152704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.152712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.158622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.282 [2024-07-15 12:25:14.158643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.282 [2024-07-15 12:25:14.158651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.282 [2024-07-15 12:25:14.165041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.165062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.165070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.171039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.171060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.171068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.176915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.176937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.176945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.182425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.182448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.182456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.187948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.187970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.187979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.193861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.193883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.193891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.199763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.199785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.199793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.205240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.205261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.205273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.210988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.211009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.211017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.216450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.216473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.216482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.221886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.221909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.221917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.227723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.227745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.227754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.233288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.233310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.233318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.239700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.239723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.239731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.245434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.245456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.245465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.250913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.250935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.250944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.256330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.256356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.256364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.261505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.261530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.261539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.264711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.264734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 
[2024-07-15 12:25:14.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.270912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.270934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.270942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.283 [2024-07-15 12:25:14.276927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.283 [2024-07-15 12:25:14.276949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.283 [2024-07-15 12:25:14.276958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.282459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.282481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.282489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.288053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.288075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.288084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.293937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.293958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.293966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.300148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.300169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.300177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.306884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.306906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.306915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.314596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.314617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.314626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.322528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.322558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.331277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.331300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.331308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.339681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.339703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.339712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.543 [2024-07-15 12:25:14.348082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.543 [2024-07-15 12:25:14.348103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.543 [2024-07-15 12:25:14.348112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.357370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.357392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.357401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.366602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.366625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.366634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.376109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.376132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.376144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.385408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.385430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.385439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.394424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.394445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.394454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.402205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.402235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.402245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.411577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.411600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.411608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.420579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.420601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.420610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.428721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.428747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.428755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.436261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.436284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.436293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.443717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.443739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.443747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.450679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.450706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.450714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.457139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.457160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.457168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.463661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.463683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.463691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.469620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.469643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.469650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.476757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 
[2024-07-15 12:25:14.476778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.476787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.484365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.484394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.491453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.491475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.491483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.498089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.498110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.498119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.504683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.504709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.504717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.511220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.511245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.511253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.518157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.518179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.518187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.525028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.525050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.525059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.531071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.531094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.531102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.544 [2024-07-15 12:25:14.537785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.544 [2024-07-15 12:25:14.537807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.544 [2024-07-15 12:25:14.537815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.803 [2024-07-15 12:25:14.543968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.803 [2024-07-15 12:25:14.543990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.803 [2024-07-15 12:25:14.543999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.803 [2024-07-15 12:25:14.550080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.803 [2024-07-15 12:25:14.550102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.803 [2024-07-15 12:25:14.550110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.803 [2024-07-15 12:25:14.556416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.556445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.562675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.562697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.562711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.570412] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.570434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.570443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.579282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.579304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.579312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.588081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.588106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.588115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.597060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.597083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.597091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.605398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.605420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.605429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.614772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.614795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.614804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.623824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.623847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.623856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:24.804 [2024-07-15 12:25:14.631938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.631961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.631970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.640984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.641011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.650875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.650899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.650908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.659759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.659782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.659791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.669089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.669112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.669121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.678314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.678338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.678347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.687694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.687717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.687726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.696668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.696690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.696698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.704390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.704413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.704422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.713012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.713034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.713043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.721376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.721399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.721408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.730732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.730755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.730763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.740339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.740361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.740370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.749512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.749534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.749543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.758430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.758452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.758461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.768057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.768080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.768089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.776982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.777005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.777013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.786089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.786111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.786120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.804 [2024-07-15 12:25:14.794893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:24.804 [2024-07-15 12:25:14.794915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.804 [2024-07-15 12:25:14.794928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.803718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.803740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.803748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.812036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.812057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.812065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.819656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.819678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.819687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.826936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.826958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.826966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.834087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.834108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.834117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.841867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.841889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.841898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.850063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.850085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.850093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.857288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.857310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.857318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.863662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 
[2024-07-15 12:25:14.863693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.869858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.869879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.869887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.877303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.877325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.877333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.884516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.884538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.884546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.891507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.891528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.891536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.898579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.898601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.898609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.905981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.906002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.906010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.063 [2024-07-15 12:25:14.913159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.063 [2024-07-15 12:25:14.913180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.063 [2024-07-15 12:25:14.913189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.920501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.920523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.920536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.928447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.928469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.928477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.936479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.936500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.936509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.944786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.944809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.944817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.952991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.953013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.953021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.960705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.960727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.960735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.969420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.969442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.969450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.978258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.978279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.978288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.986587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.986610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.986618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:14.994100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:14.994126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:14.994134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.001003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.001025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.001033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.007547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.007568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.007576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.014359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.014381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.014389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.021077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.021098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.021107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.027504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.027525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.027533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.034019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.034039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.034047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.040489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.040510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.040518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.046601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.046622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.046630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.052743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.052764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.052772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.064 [2024-07-15 12:25:15.058391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.064 [2024-07-15 12:25:15.058413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.064 [2024-07-15 12:25:15.058422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.064012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 
[2024-07-15 12:25:15.064034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.064042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.070022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.070044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.070053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.076735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.076757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.076765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.083333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.083355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.083364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.089388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.089410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.095623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.095645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.095653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.102159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.102181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.102193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.108260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.108281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.108289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.114127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.114148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.114156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.119852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.119875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.119883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.125776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.125797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.125805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.131680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.131703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.131711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:25.323 [2024-07-15 12:25:15.137628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.323 [2024-07-15 12:25:15.137652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.323 [2024-07-15 12:25:15.137660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:25.324 [2024-07-15 12:25:15.143453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140) 00:35:25.324 [2024-07-15 12:25:15.143475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.324 [2024-07-15 12:25:15.143483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:25.324 [2024-07-15 12:25:15.149365] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140)
00:35:25.324 [2024-07-15 12:25:15.149387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.324 [2024-07-15 12:25:15.149396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:25.324 [2024-07-15 12:25:15.155395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x77d140)
00:35:25.324 [2024-07-15 12:25:15.155422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.324 [2024-07-15 12:25:15.155430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:25.324
00:35:25.324 Latency(us)
00:35:25.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.324 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:25.324 nvme0n1 : 2.00 4750.30 593.79 0.00 0.00 3365.56 683.85 9744.92
00:35:25.324 ===================================================================================================================
00:35:25.324 Total : 4750.30 593.79 0.00 0.00 3365.56 683.85 9744.92
00:35:25.324 0
00:35:25.324 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:25.324 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:25.324 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:25.324 | .driver_specific
00:35:25.324 | .nvme_error
00:35:25.324 | .status_code
00:35:25.324 | .command_transient_transport_error'
00:35:25.324 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 306 > 0 ))
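In the trace above, get_transient_errcount reads the per-bdev NVMe error statistics over the bdevperf RPC socket and extracts the transient-transport-error counter with jq; the check passes because the counter came back as 306 for this randread pass. A minimal stand-alone sketch of the same check, with the Jenkins workspace prefix shortened to scripts/ and assuming jq is installed (not part of the captured output):

  # query bdevperf's RPC socket for nvme0n1 I/O and error statistics
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the injected data digest errors must have been reported as transient transport errors
  (( count > 0 ))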
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1355659
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1355659 ']'
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1355659
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1355659
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1355659'
00:35:25.583 killing process with pid 1355659
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1355659
00:35:25.583 Received shutdown signal,
test time was about 2.000000 seconds
00:35:25.583
00:35:25.583 Latency(us)
00:35:25.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:25.583 ===================================================================================================================
00:35:25.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1355659
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1356313
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1356313 /var/tmp/bperf.sock
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1356313 ']'
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:35:25.583 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:35:25.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:25.841 [2024-07-15 12:25:15.622303] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization...
00:35:25.841 [2024-07-15 12:25:15.622352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356313 ]
00:35:25.841 EAL: No free 2048 kB hugepages reported on node 1
00:35:25.841 [2024-07-15 12:25:15.690343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:25.841 [2024-07-15 12:25:15.731178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:25.841 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:26.099 12:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:26.666 nvme0n1
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:35:26.666 12:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:26.666 Running I/O for 2 seconds...
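Condensed from the setup trace above, the randwrite pass boils down to the following sequence (a sketch rather than captured output: the long Jenkins workspace paths are shortened to scripts/, build/examples/ and examples/, and rpc_cmd is assumed to address the nvmf target application's default RPC socket, which the trace does not show):

  # start bdevperf idle (-z) on its own RPC socket: 2 s of 4 KiB random writes at queue depth 128 once kicked off
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # keep per-status-code NVMe error counters and retry transient errors indefinitely
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the NVMe/TCP controller with data digest (DDGST) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # on the target side, make the accel crc32c operation produce corrupted digests (arguments copied from the trace)
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # drive the workload; the data digest errors logged below are the intended outcome
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests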
00:35:26.666 [2024-07-15 12:25:16.509681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed0b0 00:35:26.666 [2024-07-15 12:25:16.510541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.510575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.518093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:26.666 [2024-07-15 12:25:16.518816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.518837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.528462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e27f0 00:35:26.666 [2024-07-15 12:25:16.529290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.529311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.537989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f9b30 00:35:26.666 [2024-07-15 12:25:16.538936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.538955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.547346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8a50 00:35:26.666 [2024-07-15 12:25:16.548308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.548328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.556586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7970 00:35:26.666 [2024-07-15 12:25:16.557550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.557569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.565747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed4e8 00:35:26.666 [2024-07-15 12:25:16.566713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.566731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a 
p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.574918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee5c8 00:35:26.666 [2024-07-15 12:25:16.575932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.575951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.584123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ef6a8 00:35:26.666 [2024-07-15 12:25:16.585114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.585133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.594507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f0ff8 00:35:26.666 [2024-07-15 12:25:16.595856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.595875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.603626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1f80 00:35:26.666 [2024-07-15 12:25:16.605051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.605069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.612145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e8d30 00:35:26.666 [2024-07-15 12:25:16.613208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.613231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.621176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e7c50 00:35:26.666 [2024-07-15 12:25:16.622233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.622253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.630392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ec408 00:35:26.666 [2024-07-15 12:25:16.631473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.631491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.666 [2024-07-15 12:25:16.639758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2948 00:35:26.666 [2024-07-15 12:25:16.640869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.666 [2024-07-15 12:25:16.640888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.667 [2024-07-15 12:25:16.649032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1868 00:35:26.667 [2024-07-15 12:25:16.650091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.667 [2024-07-15 12:25:16.650109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.667 [2024-07-15 12:25:16.658201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eff18 00:35:26.667 [2024-07-15 12:25:16.659309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.667 [2024-07-15 12:25:16.659328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.667514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f96f8 00:35:26.925 [2024-07-15 12:25:16.668569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.668588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.676677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8618 00:35:26.925 [2024-07-15 12:25:16.677744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.677762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.685955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e5220 00:35:26.925 [2024-07-15 12:25:16.686936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.686955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.695053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8618 00:35:26.925 [2024-07-15 12:25:16.696030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.696049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.703506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fe720 00:35:26.925 [2024-07-15 12:25:16.704466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.704483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.712221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1f80 00:35:26.925 [2024-07-15 12:25:16.712999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.713016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.721412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190dece0 00:35:26.925 [2024-07-15 12:25:16.722176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.722195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.730118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fa7d8 00:35:26.925 [2024-07-15 12:25:16.730918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.730936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.739709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f35f0 00:35:26.925 [2024-07-15 12:25:16.740528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.740547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.749320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e38d0 00:35:26.925 [2024-07-15 12:25:16.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.750352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.759518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e5a90 00:35:26.925 [2024-07-15 12:25:16.760679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.760698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.768809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6b70 00:35:26.925 [2024-07-15 12:25:16.770002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.770022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.778170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190de470 00:35:26.925 [2024-07-15 12:25:16.779334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.779352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.787328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3498 00:35:26.925 [2024-07-15 12:25:16.788486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.788505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.796444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e23b8 00:35:26.925 [2024-07-15 12:25:16.797517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.797536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.805668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6020 00:35:26.925 [2024-07-15 12:25:16.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.806733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.813846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e12d8 00:35:26.925 [2024-07-15 12:25:16.815294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.822536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f9b30 00:35:26.925 [2024-07-15 12:25:16.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.823324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:26.925 [2024-07-15 12:25:16.831742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fac10 00:35:26.925 [2024-07-15 12:25:16.832436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.925 [2024-07-15 12:25:16.832455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.840923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fbcf0 00:35:26.926 [2024-07-15 12:25:16.841706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.841725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.850087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f5378 00:35:26.926 [2024-07-15 12:25:16.850763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.850783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.859549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190dfdc0 00:35:26.926 [2024-07-15 12:25:16.860450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.860469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.870030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8e88 00:35:26.926 [2024-07-15 12:25:16.871301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.871320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.878124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f20d8 00:35:26.926 [2024-07-15 12:25:16.878788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.878806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.887102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df550 00:35:26.926 [2024-07-15 12:25:16.887892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.887911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.895640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f0ff8 00:35:26.926 [2024-07-15 12:25:16.896479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.896498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.905279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1710 00:35:26.926 [2024-07-15 12:25:16.906223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.906245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:26.926 [2024-07-15 12:25:16.914850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8e88 00:35:26.926 [2024-07-15 12:25:16.915856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:26.926 [2024-07-15 12:25:16.915875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:27.185 [2024-07-15 12:25:16.924050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e8d30 00:35:27.185 [2024-07-15 12:25:16.925171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.185 [2024-07-15 12:25:16.925190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.185 [2024-07-15 12:25:16.932728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1868 00:35:27.186 [2024-07-15 12:25:16.933463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.933482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.941757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f3a28 00:35:27.186 [2024-07-15 12:25:16.942477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.942496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.950889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e5220 00:35:27.186 [2024-07-15 12:25:16.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 
12:25:16.951626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.960029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ef6a8 00:35:27.186 [2024-07-15 12:25:16.960767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.960785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.969235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190efae0 00:35:27.186 [2024-07-15 12:25:16.969891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.969911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.978769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df118 00:35:27.186 [2024-07-15 12:25:16.979277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.979297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.988083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1f80 00:35:27.186 [2024-07-15 12:25:16.988922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.988944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:16.997256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e7818 00:35:27.186 [2024-07-15 12:25:16.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:16.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.006477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6458 00:35:27.186 [2024-07-15 12:25:17.007327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.007346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.015617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f9f68 00:35:27.186 [2024-07-15 12:25:17.016501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 
[2024-07-15 12:25:17.016519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.024959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fcdd0 00:35:27.186 [2024-07-15 12:25:17.025852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.025871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.034157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eff18 00:35:27.186 [2024-07-15 12:25:17.035021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.035039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.043344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f96f8 00:35:27.186 [2024-07-15 12:25:17.044230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.044249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.052542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1430 00:35:27.186 [2024-07-15 12:25:17.053394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.061706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2948 00:35:27.186 [2024-07-15 12:25:17.062568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.062586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.070846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f4298 00:35:27.186 [2024-07-15 12:25:17.071736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.071754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.080113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb480 00:35:27.186 [2024-07-15 12:25:17.080967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:27.186 [2024-07-15 12:25:17.080985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.089258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fa3a0 00:35:27.186 [2024-07-15 12:25:17.090109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.090127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.098383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:27.186 [2024-07-15 12:25:17.099266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.099285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.107564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6300 00:35:27.186 [2024-07-15 12:25:17.108447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.108466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.116703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ddc00 00:35:27.186 [2024-07-15 12:25:17.117602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.117621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.125875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fdeb0 00:35:27.186 [2024-07-15 12:25:17.126758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.126776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.135060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e2c28 00:35:27.186 [2024-07-15 12:25:17.135917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.135936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.144200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e73e0 00:35:27.186 [2024-07-15 12:25:17.145058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:27.186 [2024-07-15 12:25:17.145076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.153343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6020 00:35:27.186 [2024-07-15 12:25:17.154190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.154209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.162487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eee38 00:35:27.186 [2024-07-15 12:25:17.163338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.163357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.171620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f0788 00:35:27.186 [2024-07-15 12:25:17.172467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.172485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.186 [2024-07-15 12:25:17.180843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e5658 00:35:27.186 [2024-07-15 12:25:17.181754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.186 [2024-07-15 12:25:17.181772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.190151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f92c0 00:35:27.445 [2024-07-15 12:25:17.190997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.191015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.199309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f0ff8 00:35:27.445 [2024-07-15 12:25:17.200161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.200179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.208450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2510 00:35:27.445 [2024-07-15 12:25:17.209301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6705 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.209319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.217635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ea248 00:35:27.445 [2024-07-15 12:25:17.218539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.218557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.226824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7100 00:35:27.445 [2024-07-15 12:25:17.227671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.227692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.235981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb8b8 00:35:27.445 [2024-07-15 12:25:17.236852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.236870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.245128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fa7d8 00:35:27.445 [2024-07-15 12:25:17.245989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.246007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.254276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fda78 00:35:27.445 [2024-07-15 12:25:17.255090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.255108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.263519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6fa8 00:35:27.445 [2024-07-15 12:25:17.264381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.264399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.272668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190de8a8 00:35:27.445 [2024-07-15 12:25:17.273509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9614 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.273527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.282016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3060 00:35:27.445 [2024-07-15 12:25:17.282902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.282921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.291149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1f80 00:35:27.445 [2024-07-15 12:25:17.292003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.292021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.300308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e7818 00:35:27.445 [2024-07-15 12:25:17.301190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.309447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6458 00:35:27.445 [2024-07-15 12:25:17.310312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.310333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.318655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f9f68 00:35:27.445 [2024-07-15 12:25:17.319508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.319527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.327802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fcdd0 00:35:27.445 [2024-07-15 12:25:17.328662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.328681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.336927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eff18 00:35:27.445 [2024-07-15 12:25:17.337791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 
nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.337809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.346080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f96f8 00:35:27.445 [2024-07-15 12:25:17.346941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.445 [2024-07-15 12:25:17.346959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.445 [2024-07-15 12:25:17.355221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1430 00:35:27.446 [2024-07-15 12:25:17.356077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.356095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.364369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2948 00:35:27.446 [2024-07-15 12:25:17.365208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.365229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.373529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f4298 00:35:27.446 [2024-07-15 12:25:17.374417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.374436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.382729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb480 00:35:27.446 [2024-07-15 12:25:17.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.383622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.391971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fa3a0 00:35:27.446 [2024-07-15 12:25:17.392849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.392867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.401131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:27.446 [2024-07-15 12:25:17.401977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.401996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.410282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6300 00:35:27.446 [2024-07-15 12:25:17.411165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.411183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.419446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ddc00 00:35:27.446 [2024-07-15 12:25:17.420277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.420296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.428632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fdeb0 00:35:27.446 [2024-07-15 12:25:17.429510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.429529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.446 [2024-07-15 12:25:17.437782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e2c28 00:35:27.446 [2024-07-15 12:25:17.438666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.446 [2024-07-15 12:25:17.438684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.447120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e73e0 00:35:27.705 [2024-07-15 12:25:17.448184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.448203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.456529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6020 00:35:27.705 [2024-07-15 12:25:17.457408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.457426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.465674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eee38 00:35:27.705 [2024-07-15 12:25:17.466518] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.466536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.475067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e12d8 00:35:27.705 [2024-07-15 12:25:17.475702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.475721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.484468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fd640 00:35:27.705 [2024-07-15 12:25:17.485451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.485469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.493625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6b70 00:35:27.705 [2024-07-15 12:25:17.494608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.494626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.502795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190de470 00:35:27.705 [2024-07-15 12:25:17.503769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.503787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.511975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3498 00:35:27.705 [2024-07-15 12:25:17.512948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.512967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.521157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e23b8 00:35:27.705 [2024-07-15 12:25:17.522140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.522158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.530362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f81e0 00:35:27.705 [2024-07-15 12:25:17.531366] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.531385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.539811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ecc78 00:35:27.705 [2024-07-15 12:25:17.540790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.540809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.548986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1710 00:35:27.705 [2024-07-15 12:25:17.549867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.549888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.559348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e0630 00:35:27.705 [2024-07-15 12:25:17.560819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.560838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.568917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6300 00:35:27.705 [2024-07-15 12:25:17.570493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.570512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.575388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f0ff8 00:35:27.705 [2024-07-15 12:25:17.576144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.576163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.585896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2510 00:35:27.705 [2024-07-15 12:25:17.587070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.587090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.595499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df118 00:35:27.705 [2024-07-15 
12:25:17.596817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.596836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:27.705 [2024-07-15 12:25:17.603573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1ca0 00:35:27.705 [2024-07-15 12:25:17.604197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.705 [2024-07-15 12:25:17.604216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.612846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:27.706 [2024-07-15 12:25:17.613819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.613838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.621580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed0b0 00:35:27.706 [2024-07-15 12:25:17.622551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.622569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.631144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e01f8 00:35:27.706 [2024-07-15 12:25:17.632219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.632241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.640716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6cc8 00:35:27.706 [2024-07-15 12:25:17.641904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.641923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.650273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ebfd0 00:35:27.706 [2024-07-15 12:25:17.651585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.651604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.659848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb480 00:35:27.706 
[2024-07-15 12:25:17.661279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.668348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e88f8 00:35:27.706 [2024-07-15 12:25:17.669334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.669352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.676780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e73e0 00:35:27.706 [2024-07-15 12:25:17.677846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.677864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.685313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fc128 00:35:27.706 [2024-07-15 12:25:17.685921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.685940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:27.706 [2024-07-15 12:25:17.693737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e5220 00:35:27.706 [2024-07-15 12:25:17.694427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.706 [2024-07-15 12:25:17.694446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.704022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed920 00:35:27.965 [2024-07-15 12:25:17.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.704812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.712714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6890 00:35:27.965 [2024-07-15 12:25:17.713523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.713542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.722341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e7818 
00:35:27.965 [2024-07-15 12:25:17.723291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.723310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.732535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f4b08 00:35:27.965 [2024-07-15 12:25:17.733511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.733531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.740958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6cc8 00:35:27.965 [2024-07-15 12:25:17.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.742256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.749434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7538 00:35:27.965 [2024-07-15 12:25:17.750033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.750052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.758796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7da8 00:35:27.965 [2024-07-15 12:25:17.759254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.759272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.768100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed4e8 00:35:27.965 [2024-07-15 12:25:17.768807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.768826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.777517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eff18 00:35:27.965 [2024-07-15 12:25:17.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.778465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.786352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with 
pdu=0x2000190efae0 00:35:27.965 [2024-07-15 12:25:17.787271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.787293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.796376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6fa8 00:35:27.965 [2024-07-15 12:25:17.797410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.797430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.806037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e23b8 00:35:27.965 [2024-07-15 12:25:17.807194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.807213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.815603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee190 00:35:27.965 [2024-07-15 12:25:17.816881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.816899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.824112] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e9168 00:35:27.965 [2024-07-15 12:25:17.824941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.824959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.833204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fe720 00:35:27.965 [2024-07-15 12:25:17.834026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.834045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.842637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df118 00:35:27.965 [2024-07-15 12:25:17.843661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.843679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.851301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd96ce0) with pdu=0x2000190fb048 00:35:27.965 [2024-07-15 12:25:17.852326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.852344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.860900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e8d30 00:35:27.965 [2024-07-15 12:25:17.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.862067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.870466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fdeb0 00:35:27.965 [2024-07-15 12:25:17.871725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.871744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.880087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e2c28 00:35:27.965 [2024-07-15 12:25:17.881481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.881499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.888178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eee38 00:35:27.965 [2024-07-15 12:25:17.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.888897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.897477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e7818 00:35:27.965 [2024-07-15 12:25:17.898415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.898433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.907090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f57b0 00:35:27.965 [2024-07-15 12:25:17.908288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.908308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.915770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7da8 00:35:27.965 [2024-07-15 12:25:17.916903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.916922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.925405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f8a50 00:35:27.965 [2024-07-15 12:25:17.926639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.926658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.934990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3498 00:35:27.965 [2024-07-15 12:25:17.936371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.936389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.944572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e01f8 00:35:27.965 [2024-07-15 12:25:17.946072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.946091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.951043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e23b8 00:35:27.965 [2024-07-15 12:25:17.951683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.951702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:27.965 [2024-07-15 12:25:17.960422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3060 00:35:27.965 [2024-07-15 12:25:17.961009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:27.965 [2024-07-15 12:25:17.961028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:17.970106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb480 00:35:28.224 [2024-07-15 12:25:17.970886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:17.970904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:17.980619] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e38d0 00:35:28.224 [2024-07-15 12:25:17.981823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:17.981841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:17.990244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:28.224 [2024-07-15 12:25:17.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:17.991548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:17.999836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e99d8 00:35:28.224 [2024-07-15 12:25:18.001244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.001262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.006341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fcdd0 00:35:28.224 [2024-07-15 12:25:18.007017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.007037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.016855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eff18 00:35:28.224 [2024-07-15 12:25:18.017973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.017992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.027759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190de038 00:35:28.224 [2024-07-15 12:25:18.029379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.029403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.034295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ebb98 00:35:28.224 [2024-07-15 12:25:18.035042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.035061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 
12:25:18.044772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ed920 00:35:28.224 [2024-07-15 12:25:18.045918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.045936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.054477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ebfd0 00:35:28.224 [2024-07-15 12:25:18.055830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.055848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.063011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fef90 00:35:28.224 [2024-07-15 12:25:18.063916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.063934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.072084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2510 00:35:28.224 [2024-07-15 12:25:18.072983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.073002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.081542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f5be8 00:35:28.224 [2024-07-15 12:25:18.082627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.082645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.091218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ef270 00:35:28.224 [2024-07-15 12:25:18.092498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.092518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.100140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fe2e8 00:35:28.224 [2024-07-15 12:25:18.101363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.101382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:28.224 
[2024-07-15 12:25:18.109800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e4140 00:35:28.224 [2024-07-15 12:25:18.111135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.111158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.119511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f1868 00:35:28.224 [2024-07-15 12:25:18.120958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.120976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.129117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee5c8 00:35:28.224 [2024-07-15 12:25:18.130712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.130730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.135636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7100 00:35:28.224 [2024-07-15 12:25:18.136375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.224 [2024-07-15 12:25:18.136394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:28.224 [2024-07-15 12:25:18.144311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ebfd0 00:35:28.224 [2024-07-15 12:25:18.145064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.145082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.153893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e88f8 00:35:28.225 [2024-07-15 12:25:18.154665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.163471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f9f68 00:35:28.225 [2024-07-15 12:25:18.164445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.164464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:35:28.225 [2024-07-15 12:25:18.173079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190eb760 00:35:28.225 [2024-07-15 12:25:18.174182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.174201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.182741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ec840 00:35:28.225 [2024-07-15 12:25:18.183963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.183981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.192343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e88f8 00:35:28.225 [2024-07-15 12:25:18.193672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.193691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.201931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6738 00:35:28.225 [2024-07-15 12:25:18.203409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.203428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.211590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fc998 00:35:28.225 [2024-07-15 12:25:18.213170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.213189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:28.225 [2024-07-15 12:25:18.218088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fbcf0 00:35:28.225 [2024-07-15 12:25:18.218879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.225 [2024-07-15 12:25:18.218899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:28.484 [2024-07-15 12:25:18.228825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e1b48 00:35:28.484 [2024-07-15 12:25:18.230045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.230065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004d p:0 
m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.238468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ebfd0 00:35:28.485 [2024-07-15 12:25:18.239805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.239824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.248006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e9e10 00:35:28.485 [2024-07-15 12:25:18.249416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.249435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.257545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fb480 00:35:28.485 [2024-07-15 12:25:18.259161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.259180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.264185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6020 00:35:28.485 [2024-07-15 12:25:18.264920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.274658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df118 00:35:28.485 [2024-07-15 12:25:18.275780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.275799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.284263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f2d80 00:35:28.485 [2024-07-15 12:25:18.285585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.285603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.292754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee190 00:35:28.485 [2024-07-15 12:25:18.293626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.293645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.301190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e6fa8 00:35:28.485 [2024-07-15 12:25:18.302150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.302168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.310987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ec408 00:35:28.485 [2024-07-15 12:25:18.311977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.311997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.320658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e8088 00:35:28.485 [2024-07-15 12:25:18.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.321859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.330255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df988 00:35:28.485 [2024-07-15 12:25:18.331549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.331569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.339951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f4f40 00:35:28.485 [2024-07-15 12:25:18.341330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.341348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.348102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee5c8 00:35:28.485 [2024-07-15 12:25:18.349002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.349026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.357482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f3a28 00:35:28.485 [2024-07-15 12:25:18.358373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.358392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.366978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190df550 00:35:28.485 [2024-07-15 12:25:18.367862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.367881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.375649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f7100 00:35:28.485 [2024-07-15 12:25:18.376503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.376522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.385307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fcdd0 00:35:28.485 [2024-07-15 12:25:18.386358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.386377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.394890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ee5c8 00:35:28.485 [2024-07-15 12:25:18.396046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.396065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.403413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f92c0 00:35:28.485 [2024-07-15 12:25:18.404131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.404150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.412578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e84c0 00:35:28.485 [2024-07-15 12:25:18.413309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.413327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.420971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f4b08 00:35:28.485 [2024-07-15 12:25:18.421796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.421815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.431255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ec840 00:35:28.485 [2024-07-15 12:25:18.432123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.432143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.440675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f6cc8 00:35:28.485 [2024-07-15 12:25:18.441384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.441403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.449498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fda78 00:35:28.485 [2024-07-15 12:25:18.450732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.450752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.457376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190dfdc0 00:35:28.485 [2024-07-15 12:25:18.458048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.485 [2024-07-15 12:25:18.458067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:28.485 [2024-07-15 12:25:18.467605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190ea248 00:35:28.485 [2024-07-15 12:25:18.468327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.486 [2024-07-15 12:25:18.468346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:28.486 [2024-07-15 12:25:18.477088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190f96f8 00:35:28.486 [2024-07-15 12:25:18.478032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.486 [2024-07-15 12:25:18.478051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:28.744 [2024-07-15 12:25:18.485922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190e3498 00:35:28.744 [2024-07-15 12:25:18.486843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.744 [2024-07-15 12:25:18.486861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:28.744 [2024-07-15 12:25:18.495593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd96ce0) with pdu=0x2000190fdeb0 00:35:28.744 [2024-07-15 12:25:18.496635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:28.744 [2024-07-15 12:25:18.496653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:28.744 00:35:28.744 Latency(us) 00:35:28.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.744 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:28.744 nvme0n1 : 2.00 27715.34 108.26 0.00 0.00 4612.35 1816.49 12252.38 00:35:28.744 =================================================================================================================== 00:35:28.744 Total : 27715.34 108.26 0.00 0.00 4612.35 1816.49 12252.38 00:35:28.744 0 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:28.744 | .driver_specific 00:35:28.744 | .nvme_error 00:35:28.744 | .status_code 00:35:28.744 | .command_transient_transport_error' 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 )) 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1356313 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1356313 ']' 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1356313 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.744 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1356313 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1356313' 00:35:29.003 killing process with pid 1356313 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1356313 00:35:29.003 Received shutdown signal, test time was about 2.000000 seconds 00:35:29.003 00:35:29.003 Latency(us) 00:35:29.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.003 =================================================================================================================== 00:35:29.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:29.003 12:25:18 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1356313 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1356784 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1356784 /var/tmp/bperf.sock 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1356784 ']' 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.003 12:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.003 [2024-07-15 12:25:18.966874] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:35:29.003 [2024-07-15 12:25:18.966924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1356784 ] 00:35:29.003 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.003 Zero copy mechanism will not be used. 
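[editor's note] The trace above shows digest.sh preparing the second error-injection pass: it launches a dedicated bdevperf instance (randwrite, 128 KiB I/O, queue depth 16, started with -z so it idles until configured over /var/tmp/bperf.sock) and then blocks in waitforlisten until that RPC socket answers. A minimal stand-in for that launch-and-wait step, assuming the SPDK tree path and socket shown in the log (the real wait helper lives in autotest_common.sh), might look like:

  # Launch bdevperf idle (-z) on its own RPC socket; parameters taken from the command line in the log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Simplified stand-in for waitforlisten(): poll the socket with a known RPC
  # (rpc_get_methods) until the application is up and listening.
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done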
00:35:29.003 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.260 [2024-07-15 12:25:19.034949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.260 [2024-07-15 12:25:19.073847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.260 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:29.260 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:35:29.260 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.260 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.518 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.084 nvme0n1 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:30.084 12:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:30.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:30.084 Zero copy mechanism will not be used. 00:35:30.084 Running I/O for 2 seconds... 
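[editor's note] With the reactor up, the trace shows digest.sh configuring this bdevperf instance and re-arming the CRC32C error injector before the 2-second run: NVMe error statistics are enabled with unlimited bdev retries, the controller is attached with data digest (--ddgst) on, the accel injector is set to corrupt crc32c results at the interval given by -i 32, the workload is driven through bdevperf.py, and the transient-error counter is read back from bdev_get_iostat just as in the (( 217 > 0 )) check earlier. A condensed sketch of that RPC sequence, using only commands visible in the log and assuming (from the rpc_cmd vs. bperf_rpc split in the trace) that the injector calls go to the nvmf target's default RPC socket while the rest target /var/tmp/bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT_RPC="$SPDK/scripts/rpc.py"                            # nvmf target, default socket (rpc_cmd)
  BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf instance (bperf_rpc)

  # Count digest failures as NVMe errors and retry failed I/O indefinitely at the bdev layer.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Reset the crc32c error injector, then attach the subsystem with data digest enabled.
  $TGT_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c operations at the configured interval so writes complete with
  # COMMAND TRANSIENT TRANSPORT ERROR on the host side.
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the configured randwrite job, then read the transient-error counter back from iostat.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  errcount=$($BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
      | .driver_specific
      | .nvme_error
      | .status_code
      | .command_transient_transport_error')
  (( errcount > 0 ))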
00:35:30.084 [2024-07-15 12:25:19.898850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.899238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.899268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.905452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.905841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.905865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.913305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.913692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.913714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.920092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.920465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.920485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.927915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.928305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.928326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.934745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.935123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.935143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.941305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.941676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.941695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.949563] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.949948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.949968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.956810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.957188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.957208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.964617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.964977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.964996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.971588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.971685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.971707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.978726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.979096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.979116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.986173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.986552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.986572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.992080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.992470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.992490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:19.997861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:19.998238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:19.998257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:20.003968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:20.004338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:20.004358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:20.010047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:20.010422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:20.010442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.084 [2024-07-15 12:25:20.017619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.084 [2024-07-15 12:25:20.017989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.084 [2024-07-15 12:25:20.018010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.026436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.027455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.027477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.034234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.034608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.034629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.041009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.041376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.041396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.048352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.048716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.048736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.055810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.056195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.056215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.062309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.062696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.062715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.069468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.069866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.069886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.075983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.076379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.076399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.085 [2024-07-15 12:25:20.081722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.085 [2024-07-15 12:25:20.082066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.085 [2024-07-15 12:25:20.082086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.087138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.087487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 
12:25:20.087512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.092268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.092617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.092636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.097679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.098013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.098032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.102484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.102833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.102853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.107117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.107467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.107488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.112809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.113148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.113168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.118408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.118771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.118790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.125608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.126032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.126051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.132776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.133154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.133174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.140356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.140779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.140799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.148180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.148617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.148636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.156320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.156834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.156854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.165165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.165605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.165624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.173389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.173771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.173790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.181617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.182066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.182086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.190030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.190444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.190463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.196385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.196738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.196757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.202412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.202755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.202774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.207869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.208211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.208239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.213627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.213972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.213992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.220142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.220493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.225272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.225610] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.225629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.229978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.230331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.230350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.234746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.235086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.235105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.239441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.344 [2024-07-15 12:25:20.239789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.344 [2024-07-15 12:25:20.239808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.344 [2024-07-15 12:25:20.244125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.244471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.244491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.248785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.249125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.249148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.253414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.253746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.253765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.258489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.258834] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.258853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.263356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.263693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.263713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.268281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.268626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.268645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.273642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.273978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.273997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.279618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.279967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.279986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.286243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.286591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.286611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.291987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.292338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.292357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.297982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 
00:35:30.345 [2024-07-15 12:25:20.298325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.298344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.304447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.304794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.304813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.310818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.311155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.311174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.316872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.317218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.317243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.323344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.323695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.323714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.329520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.329864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.329883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.335862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.345 [2024-07-15 12:25:20.336278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.345 [2024-07-15 12:25:20.336297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.345 [2024-07-15 12:25:20.342369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.342718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.342739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.348685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.349031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.349050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.354680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.355040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.355058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.361397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.361747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.361766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.367655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.368034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.368053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.373847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.374203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.374223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.379330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.379695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.384759] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.385097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.385116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.390895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.391246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.391265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.396826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.397177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.397196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.403035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.403466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.403490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.410817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.411313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.411333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.418975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.419421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.419441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.428403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.428883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.428902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:30.604 [2024-07-15 12:25:20.436803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.437254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.437273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.445171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.445636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.604 [2024-07-15 12:25:20.445656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.604 [2024-07-15 12:25:20.454033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.604 [2024-07-15 12:25:20.454480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.454499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.462733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.463196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.463215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.471476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.471944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.471964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.480114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.480503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.480522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.488407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.488860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.497165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.497583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.497603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.504774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.505269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.505289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.512848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.513286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.513306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.521444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.521915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.521934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.529897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.530327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.530346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.537763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.538222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.546301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.546711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.546730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.554454] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.554861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.554880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.561561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.561906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.561924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.568074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.568405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.568425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.573116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.573467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.573486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.578005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.578354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.578373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.582687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.583016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.583036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.587417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.587763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.587782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.592009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.592352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.592371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.596669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.597011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.597033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.605 [2024-07-15 12:25:20.601387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.605 [2024-07-15 12:25:20.601737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.605 [2024-07-15 12:25:20.601756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.606046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.606376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.606396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.610650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.610974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.615163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.615498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.615517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.620086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.620408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 
[2024-07-15 12:25:20.620427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.624598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.624912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.624932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.629068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.629386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.629405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.633546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.633864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.633883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.638017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.638331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.638350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.642466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.642786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.642805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.646981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.647303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.647322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.651444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.651769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.651788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.655948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.656267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.656286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.660408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.660722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.660741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.664858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.665192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.665212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.669506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.669828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.669847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.673970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.674279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.674298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.678436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.678758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.678777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.682927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.683248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.683268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.687459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.687781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.687800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.691984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.692313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.692332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.697602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.697924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.697943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.702256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.702579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.702598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.706762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.707079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.707098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.711260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.711583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.711602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.715776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.716091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.716116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.720211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.720538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.720557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.864 [2024-07-15 12:25:20.724672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.864 [2024-07-15 12:25:20.724990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.864 [2024-07-15 12:25:20.725009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.729084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.729409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.729428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.733493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.733814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.733833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.737905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.738216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.738241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.742296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.742614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.742633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.746713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 
12:25:20.747026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.747046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.751110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.751435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.751454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.755525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.755846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.755866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.759984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.760304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.760323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.764416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.764739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.764759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.768862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.769173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.769191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.773291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.773616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.773634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.777816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with 
pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.778131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.778151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.782237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.782543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.782563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.786625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.786944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.791059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.791384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.791403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.795450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.795760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.795779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.799851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.800176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.800195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.804434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.804751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.804772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.808845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.809152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.809171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.813288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.813614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.813634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.817824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.818142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.818161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.822210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.822559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.822578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.826671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.827013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.831077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.831395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.831418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.835651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.835976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.835995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.840086] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.840399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.840419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.844539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.844863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.844882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.849320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.849640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.849659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.854149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.865 [2024-07-15 12:25:20.854463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.865 [2024-07-15 12:25:20.854482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.865 [2024-07-15 12:25:20.858764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:30.866 [2024-07-15 12:25:20.859085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.866 [2024-07-15 12:25:20.859103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.863276] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.863605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.863624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.868155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.868484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.868503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:31.124 [2024-07-15 12:25:20.873232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.873548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.873568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.878087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.878398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.878417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.883132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.883457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.883477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.889191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.889614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.896218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.896672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.896692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.903077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.903401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.903419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.909708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.910092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.124 [2024-07-15 12:25:20.910111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.124 [2024-07-15 12:25:20.916258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.124 [2024-07-15 12:25:20.916608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.916627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.923822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.924255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.924274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.930814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.931189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.931207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.937543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.937905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.937926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.944207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.944594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.944613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.950175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.950502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.950522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.955217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.955559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.955578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.960357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.960687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.960706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.966073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.966541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.966561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.970817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.971155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.975337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.975669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.975692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.980128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.980446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.980467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.985137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.985465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.985485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.990929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.991253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.991274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:20.996982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:20.997317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:20.997336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.002318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.002650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.002670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.007430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.007757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.007776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.012551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.012876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.012895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.017815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.022494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.022808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.022827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.027107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.027434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 
[2024-07-15 12:25:21.027454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.031685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.032003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.032022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.036158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.036485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.036505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.040651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.040970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.040989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.045124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.045447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.045467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.049614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.049917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.049936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.054083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.054403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.058591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.058908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.058931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.063062] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.125 [2024-07-15 12:25:21.063376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.125 [2024-07-15 12:25:21.063395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.125 [2024-07-15 12:25:21.067526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.067832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.067851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.071963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.072296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.072315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.076411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.076735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.076754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.081314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.081631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.081650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.086032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.086352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.086372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.091032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.091350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.091369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.096526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.096853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.096872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.102651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.102975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.102994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.107898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.108206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.108233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.113296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.113637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.126 [2024-07-15 12:25:21.118463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.126 [2024-07-15 12:25:21.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.126 [2024-07-15 12:25:21.118792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.123568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.123884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.123904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.128982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.129322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.129341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.134204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.134535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.134554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.139356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.139687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.139707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.144380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.144707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.144726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.149742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.150067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.150086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.155329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.155658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.155679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.160455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.160773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.160793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.165585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 
12:25:21.165912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.165932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.170939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.171273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.171294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.176182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.176516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.176537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.181259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.181579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.181600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.186357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.186690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.186710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.191968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.192297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.192321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.198538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.198860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:31.386 [2024-07-15 12:25:21.198880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:31.386 [2024-07-15 12:25:21.204048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe8bdd0) with 
pdu=0x2000190fef90 00:35:31.386 [2024-07-15 12:25:21.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:31.386 [2024-07-15 12:25:21.204407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-07-15 12:25:21.209377 through 12:25:21.888909] (the same sequence repeats for each remaining injected WRITE: tcp.c:2067:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0xe8bdd0) with pdu=0x2000190fef90, the WRITE sqid:1 cid:15 nsid:1 len:32 command print with varying lba, and its completion reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0)
00:35:31.910
00:35:31.910 Latency(us)
00:35:31.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:31.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:31.910 nvme0n1 : 2.00 5121.18 640.15 0.00 0.00 3119.71 2094.30 10542.75
00:35:31.910 ===================================================================================================================
00:35:31.910 Total : 5121.18 640.15 0.00 0.00 3119.71 2094.30 10542.75
00:35:31.910 0
00:35:32.168 12:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:32.169 12:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:32.169 12:25:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:32.169 | .driver_specific
00:35:32.169 | .nvme_error
00:35:32.169 | .status_code
00:35:32.169 | .command_transient_transport_error'
00:35:32.169 12:25:21
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 330 > 0 )) 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1356784 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1356784 ']' 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1356784 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1356784 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1356784' 00:35:32.169 killing process with pid 1356784 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1356784 00:35:32.169 Received shutdown signal, test time was about 2.000000 seconds 00:35:32.169 00:35:32.169 Latency(us) 00:35:32.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.169 =================================================================================================================== 00:35:32.169 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.169 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1356784 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1355127 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1355127 ']' 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1355127 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1355127 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1355127' 00:35:32.428 killing process with pid 1355127 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1355127 00:35:32.428 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1355127 00:35:32.687 00:35:32.687 real 0m13.827s 00:35:32.687 user 0m26.429s 00:35:32.687 sys 0m4.228s 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.687 ************************************ 00:35:32.687 END TEST nvmf_digest_error 00:35:32.687 ************************************ 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:32.687 rmmod nvme_tcp 00:35:32.687 rmmod nvme_fabrics 00:35:32.687 rmmod nvme_keyring 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1355127 ']' 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1355127 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1355127 ']' 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1355127 00:35:32.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1355127) - No such process 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1355127 is not found' 00:35:32.687 Process with pid 1355127 is not found 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:32.687 12:25:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.217 12:25:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:35.217 00:35:35.217 real 0m35.814s 00:35:35.217 user 0m54.455s 00:35:35.217 sys 0m13.034s 00:35:35.217 12:25:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:35.217 12:25:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.217 ************************************ 00:35:35.217 END TEST nvmf_digest 00:35:35.217 ************************************ 00:35:35.217 12:25:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:35.217 12:25:24 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 
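For reference, the digest-error pass/fail decision in the trace above reduces to the (( 330 > 0 )) check: get_transient_errcount reads the per-bdev NVMe error counters over the bperf RPC socket, and the test only requires that the injected data-digest errors surfaced as transient transport errors. A minimal standalone sketch of that query, assuming the bperf application is still listening on /var/tmp/bperf.sock and exposes the bdev as nvme0n1 and that it is run from the SPDK checkout (socket path, bdev name, and jq filter all taken from the trace above), could look like:

  # read the transient transport error counter for nvme0n1 from the bdev I/O statistics
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'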
00:35:35.217 12:25:24 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:35:35.217 12:25:24 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:35:35.217 12:25:24 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:35.217 12:25:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:35.217 12:25:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:35.217 12:25:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.217 ************************************ 00:35:35.217 START TEST nvmf_bdevperf 00:35:35.217 ************************************ 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:35.217 * Looking for test storage... 00:35:35.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.217 12:25:24 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:35.218 12:25:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.483 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.483 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:40.483 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:40.483 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:40.483 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:40.484 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:40.484 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:40.484 Found net devices under 0000:86:00.0: cvl_0_0 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:40.484 Found net devices under 0000:86:00.1: cvl_0_1 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:40.484 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:40.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:40.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:35:40.809 00:35:40.809 --- 10.0.0.2 ping statistics --- 00:35:40.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.809 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:40.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:40.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:35:40.809 00:35:40.809 --- 10.0.0.1 ping statistics --- 00:35:40.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.809 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1360785 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1360785 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1360785 ']' 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:40.809 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.809 [2024-07-15 12:25:30.663637] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
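At this point nvmftestinit has finished wiring up the test link: the two E810 ports found earlier (0000:86:00.0 as cvl_0_0, 0000:86:00.1 as cvl_0_1) are used back to back, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms the path. A minimal sketch assembled from the commands traced above (the interface names and addresses are whatever this particular run picked, not fixed values):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Every later target-side command, including the nvmf_tgt launch whose startup banner appears here, is prefixed with ip netns exec cvl_0_0_ns_spdk, so the target only ever sees the namespaced side of the link.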
00:35:40.809 [2024-07-15 12:25:30.663679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.809 EAL: No free 2048 kB hugepages reported on node 1 00:35:40.809 [2024-07-15 12:25:30.733362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.809 [2024-07-15 12:25:30.774060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.809 [2024-07-15 12:25:30.774118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.809 [2024-07-15 12:25:30.774125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.809 [2024-07-15 12:25:30.774131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.809 [2024-07-15 12:25:30.774136] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:40.809 [2024-07-15 12:25:30.774270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.809 [2024-07-15 12:25:30.774312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.809 [2024-07-15 12:25:30.774312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 [2024-07-15 12:25:30.898713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 Malloc0 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 [2024-07-15 12:25:30.959273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:41.068 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:41.068 { 00:35:41.068 "params": { 00:35:41.069 "name": "Nvme$subsystem", 00:35:41.069 "trtype": "$TEST_TRANSPORT", 00:35:41.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:41.069 "adrfam": "ipv4", 00:35:41.069 "trsvcid": "$NVMF_PORT", 00:35:41.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:41.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:41.069 "hdgst": ${hdgst:-false}, 00:35:41.069 "ddgst": ${ddgst:-false} 00:35:41.069 }, 00:35:41.069 "method": "bdev_nvme_attach_controller" 00:35:41.069 } 00:35:41.069 EOF 00:35:41.069 )") 00:35:41.069 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:41.069 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:41.069 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:41.069 12:25:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:41.069 "params": { 00:35:41.069 "name": "Nvme1", 00:35:41.069 "trtype": "tcp", 00:35:41.069 "traddr": "10.0.0.2", 00:35:41.069 "adrfam": "ipv4", 00:35:41.069 "trsvcid": "4420", 00:35:41.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:41.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:41.069 "hdgst": false, 00:35:41.069 "ddgst": false 00:35:41.069 }, 00:35:41.069 "method": "bdev_nvme_attach_controller" 00:35:41.069 }' 00:35:41.069 [2024-07-15 12:25:31.010741] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
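tgt_init has now provisioned the whole target over JSON-RPC: a TCP transport with the harness's stock options (-t tcp -o -u 8192), a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the test harness's wrapper around scripts/rpc.py, so replayed by hand against the same target (assuming the default /var/tmp/spdk.sock socket this run uses) the sequence would look roughly like:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON blob printed just above is the other half of the handshake: gen_nvmf_target_json emits a bdev_nvme_attach_controller stanza for Nvme1 pointing at that listener, and bdevperf reads it from /dev/fd/62 rather than from a config file on disk.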
00:35:41.069 [2024-07-15 12:25:31.010786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1360813 ] 00:35:41.069 EAL: No free 2048 kB hugepages reported on node 1 00:35:41.327 [2024-07-15 12:25:31.078490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.327 [2024-07-15 12:25:31.118861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.327 Running I/O for 1 seconds... 00:35:42.700 00:35:42.700 Latency(us) 00:35:42.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:42.700 Verification LBA range: start 0x0 length 0x4000 00:35:42.700 Nvme1n1 : 1.01 10983.29 42.90 0.00 0.00 11609.34 2407.74 14930.81 00:35:42.700 =================================================================================================================== 00:35:42.700 Total : 10983.29 42.90 0.00 0.00 11609.34 2407.74 14930.81 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1361044 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:42.700 { 00:35:42.700 "params": { 00:35:42.700 "name": "Nvme$subsystem", 00:35:42.700 "trtype": "$TEST_TRANSPORT", 00:35:42.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.700 "adrfam": "ipv4", 00:35:42.700 "trsvcid": "$NVMF_PORT", 00:35:42.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.700 "hdgst": ${hdgst:-false}, 00:35:42.700 "ddgst": ${ddgst:-false} 00:35:42.700 }, 00:35:42.700 "method": "bdev_nvme_attach_controller" 00:35:42.700 } 00:35:42.700 EOF 00:35:42.700 )") 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:42.700 12:25:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:42.700 "params": { 00:35:42.700 "name": "Nvme1", 00:35:42.700 "trtype": "tcp", 00:35:42.700 "traddr": "10.0.0.2", 00:35:42.700 "adrfam": "ipv4", 00:35:42.700 "trsvcid": "4420", 00:35:42.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.700 "hdgst": false, 00:35:42.700 "ddgst": false 00:35:42.700 }, 00:35:42.700 "method": "bdev_nvme_attach_controller" 00:35:42.700 }' 00:35:42.700 [2024-07-15 12:25:32.544131] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
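Two sanity checks on the numbers from the first, one-second bdevperf pass (-q 128 -o 4096 -w verify): the throughput column is just IOPS times the 4 KiB I/O size, and the average latency is what Little's law predicts for a queue held at 128 outstanding commands:

    42.90 MiB/s     = 10983.29 IOPS x 4096 B / 2^20
    11609.34 us avg ~ 128 / 10983.29 IOPS = 11.65 ms per command (Little's law)

The second bdevperf instance launched above runs the same verify workload for 15 seconds; the harness then kill -9s the nvmf target out from under it (host/bdevperf.sh@33, pid 1360785, just below), which is why the next long stretch of the log is nothing but in-flight WRITE/READ commands being failed back with ABORTED - SQ DELETION as the host tears down its queue pair to the now-dead target.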
00:35:42.700 [2024-07-15 12:25:32.544177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1361044 ] 00:35:42.700 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.700 [2024-07-15 12:25:32.610604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.700 [2024-07-15 12:25:32.647502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.958 Running I/O for 15 seconds... 00:35:46.252 12:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1360785 00:35:46.252 12:25:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:46.252 [2024-07-15 12:25:35.512836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.512986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.512993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.252 [2024-07-15 12:25:35.513002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.252 [2024-07-15 12:25:35.513009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 
[2024-07-15 12:25:35.513202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.253 [2024-07-15 12:25:35.513690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.253 [2024-07-15 12:25:35.513696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 
12:25:35.513813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.513990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.513998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.254 [2024-07-15 12:25:35.514443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.254 [2024-07-15 12:25:35.514454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 
12:25:35.514538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.255 [2024-07-15 12:25:35.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:46.255 [2024-07-15 12:25:35.514707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.255 [2024-07-15 12:25:35.514944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.514951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a54d0 is same with the state(5) to be set 00:35:46.255 [2024-07-15 12:25:35.514960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:46.255 [2024-07-15 12:25:35.514964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:46.255 [2024-07-15 12:25:35.514970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95440 len:8 PRP1 0x0 PRP2 0x0 00:35:46.255 [2024-07-15 12:25:35.514977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.515019] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x26a54d0 was disconnected and freed. reset controller. 00:35:46.255 [2024-07-15 12:25:35.515064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:46.255 [2024-07-15 12:25:35.515076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.515083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:46.255 [2024-07-15 12:25:35.515090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.515099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:46.255 [2024-07-15 12:25:35.515105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.515112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:46.255 [2024-07-15 12:25:35.515118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.255 [2024-07-15 12:25:35.515124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.255 [2024-07-15 12:25:35.517935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.255 [2024-07-15 12:25:35.517965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.255 [2024-07-15 12:25:35.518592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.255 [2024-07-15 12:25:35.518608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.255 [2024-07-15 12:25:35.518615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.255 [2024-07-15 12:25:35.518792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.518969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.518978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.518985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.521823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
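The completion status repeated through the abort dump above is printed by spdk_nvme_print_completion as an (SCT/SC) pair, so "ABORTED - SQ DELETION (00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification names Command Aborted due to SQ Deletion: every WRITE/READ still queued on qid:1 was completed with that status when the submission queue went away during the qpair teardown logged just above. A minimal decode sketch (illustrative only, not part of the test output; it maps only the generic codes that appear in this stretch of the log):

    /* Illustrative sketch -- decode the "(SCT/SC)" pair that the
     * completion print above uses, e.g. "ABORTED - SQ DELETION (00/08)".
     * Only the generic status codes seen in this log are mapped. */
    #include <stdio.h>

    static const char *decode_generic_status(unsigned sct, unsigned sc)
    {
        if (sct != 0x0) {
            return "non-generic status code type (not decoded here)";
        }
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic command status";
        }
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;  /* the pair printed all through the dump above */

        printf("(%02x/%02x) -> %s\n", sct, sc, decode_generic_status(sct, sc));
        return 0;
    }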
00:35:46.256 [2024-07-15 12:25:35.531143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.531617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.531635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.531643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.531815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.531995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.532005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.532014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.534722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.256 [2024-07-15 12:25:35.544064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.544507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.544524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.544532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.544695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.544865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.544874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.544881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.547570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
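Each reconnect cycle in this stretch fails the same way: posix_sock_create reports connect() failing with errno = 111, which on Linux is ECONNREFUSED, i.e. nothing was accepting on 10.0.0.2 port 4420 at that moment, so nvme_tcp_qpair_connect_sock cannot re-establish the queue pair and the controller reinitialization ends in the failed state noted above. A standalone sketch of how that errno value arises (illustrative only, unrelated to the SPDK sock layer; it assumes no listener on 127.0.0.1:4420):

    /* Illustrative sketch: reproduce errno 111 (ECONNREFUSED) by connecting
     * to a TCP port with no listener. Assumes nothing listens on 127.0.0.1:4420. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                /* NVMe/TCP default port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }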
00:35:46.256 [2024-07-15 12:25:35.556920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.557290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.557307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.557314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.557476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.557640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.557649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.557655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.560364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.256 [2024-07-15 12:25:35.569856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.570133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.570156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.570323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.570487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.570496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.570502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.573183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.256 [2024-07-15 12:25:35.582707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.583132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.583175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.583196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.583772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.583955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.583964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.583971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.588936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.256 [2024-07-15 12:25:35.597607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.598080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.598124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.598145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.598742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.599000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.599013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.599022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.603071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.256 [2024-07-15 12:25:35.610567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.610872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.610889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.610896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.611068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.611244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.611254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.611261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.613935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.256 [2024-07-15 12:25:35.623687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.624103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.624121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.624128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.624313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.624491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.624501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.624507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.256 [2024-07-15 12:25:35.627344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.256 [2024-07-15 12:25:35.636614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.256 [2024-07-15 12:25:35.636944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.256 [2024-07-15 12:25:35.636962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.256 [2024-07-15 12:25:35.636971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.256 [2024-07-15 12:25:35.637134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.256 [2024-07-15 12:25:35.637304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.256 [2024-07-15 12:25:35.637314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.256 [2024-07-15 12:25:35.637320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.640018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.649678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.650086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.650104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.650111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.650293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.650472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.650482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.650489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.653315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.257 [2024-07-15 12:25:35.662838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.663193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.663210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.663217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.663398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.663577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.663587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.663594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.666418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.675926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.676401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.676418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.676426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.676604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.676783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.676797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.676805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.679631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.257 [2024-07-15 12:25:35.688965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.689396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.689414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.689421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.689598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.689777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.689787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.689794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.692628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.702128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.702581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.702599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.702606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.702782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.702961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.702971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.702979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.705804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.257 [2024-07-15 12:25:35.715327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.715784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.715800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.715807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.715983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.716162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.716172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.716178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.719008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.728386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.728845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.728863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.728870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.729047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.729232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.729242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.729249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.732074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.257 [2024-07-15 12:25:35.741575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.741934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.741951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.741958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.742135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.742318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.742327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.742334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.745160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.754682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.755132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.755149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.755156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.755337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.755517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.755527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.755533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.758361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.257 [2024-07-15 12:25:35.767871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.768317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.768334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.768342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.768523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.768702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.257 [2024-07-15 12:25:35.768728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.257 [2024-07-15 12:25:35.768735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.257 [2024-07-15 12:25:35.771656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.257 [2024-07-15 12:25:35.780971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.257 [2024-07-15 12:25:35.781323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.257 [2024-07-15 12:25:35.781342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.257 [2024-07-15 12:25:35.781350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.257 [2024-07-15 12:25:35.781526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.257 [2024-07-15 12:25:35.781705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.781716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.781723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.784582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.794098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.794510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.794529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.794536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.794713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.794892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.794902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.794910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.797745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.258 [2024-07-15 12:25:35.807259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.807708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.807726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.807733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.807910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.808089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.808098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.808109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.810943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.820298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.820691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.820708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.820716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.820893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.821071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.821081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.821088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.823923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.258 [2024-07-15 12:25:35.833426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.833878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.833896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.833904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.834081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.834266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.834277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.834286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.837106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.846674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.847127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.847144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.847152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.847354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.847539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.847550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.847556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.850489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.258 [2024-07-15 12:25:35.859753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.860182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.860199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.860206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.860391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.860571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.860581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.860588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.863416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.872955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.873409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.873426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.873434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.873612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.873791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.873800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.873807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.876636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.258 [2024-07-15 12:25:35.886148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.886613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.886655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.886677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.887266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.887740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.887752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.887759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.890592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.899217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.899583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.899600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.899607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.899784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.899957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.899967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.899973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.902717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.258 [2024-07-15 12:25:35.912278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.912704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.912747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.912768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.913361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.913771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.258 [2024-07-15 12:25:35.913781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.258 [2024-07-15 12:25:35.913787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.258 [2024-07-15 12:25:35.916466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.258 [2024-07-15 12:25:35.925079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.258 [2024-07-15 12:25:35.925515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.258 [2024-07-15 12:25:35.925532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.258 [2024-07-15 12:25:35.925539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.258 [2024-07-15 12:25:35.925711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.258 [2024-07-15 12:25:35.925885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.925895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.925902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.928544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:35.937979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:35.938382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:35.938400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:35.938407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:35.938579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:35.938752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.938761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.938771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.941474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.259 [2024-07-15 12:25:35.950910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:35.951384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:35.951401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:35.951408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:35.951580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:35.951753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.951763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.951769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.954417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:35.963896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:35.964359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:35.964376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:35.964383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:35.964545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:35.964708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.964717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.964723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.967453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.259 [2024-07-15 12:25:35.976854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:35.977261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:35.977279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:35.977286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:35.977459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:35.977639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.977649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.977655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.980326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:35.989856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:35.990282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:35.990305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:35.990312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:35.990496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:35.990661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:35.990671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:35.990676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:35.993287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.259 [2024-07-15 12:25:36.002949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.003342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.003360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.003367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.003546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.003710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:36.003719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:36.003726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:36.006340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:36.015936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.016365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.016383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.016391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.016566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.016730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:36.016739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:36.016745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:36.019359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.259 [2024-07-15 12:25:36.029060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.029446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.029464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.029471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.029652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.029821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:36.029831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:36.029837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:36.032519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:36.042045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.042468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.042506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.042529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.043083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.043266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:36.043275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:36.043282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:36.046000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.259 [2024-07-15 12:25:36.055051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.055486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.055503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.055509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.055671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.055834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.259 [2024-07-15 12:25:36.055843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.259 [2024-07-15 12:25:36.055849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.259 [2024-07-15 12:25:36.058533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.259 [2024-07-15 12:25:36.067869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.259 [2024-07-15 12:25:36.068316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.259 [2024-07-15 12:25:36.068357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.259 [2024-07-15 12:25:36.068379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.259 [2024-07-15 12:25:36.068875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.259 [2024-07-15 12:25:36.069039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.069047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.069053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.071743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.080699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.081110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.081146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.081170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.081763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.082039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.082049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.082056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.084682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.260 [2024-07-15 12:25:36.093534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.093954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.093970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.093977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.094140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.094327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.094337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.094344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.097064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.106456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.106832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.106849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.106855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.107018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.107180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.107190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.107196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.109862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.260 [2024-07-15 12:25:36.119277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.119685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.119703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.119712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.119875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.120038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.120048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.120054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.122739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.132203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.132654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.132697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.132718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.133229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.133417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.133428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.133435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.136105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.260 [2024-07-15 12:25:36.144999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.145440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.145484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.145506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.146070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.146239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.146249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.146272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.148936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.157901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.158352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.158396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.158416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.158994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.159221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.159239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.159246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.161925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.260 [2024-07-15 12:25:36.170725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.171167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.171210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.171246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.171826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.172392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.172402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.172409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.175011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.183511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.183935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.183977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.183998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.184530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.184704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.184712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.184718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.187457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.260 [2024-07-15 12:25:36.196435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.196864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.196880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.196888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.197050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.197214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.260 [2024-07-15 12:25:36.197222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.260 [2024-07-15 12:25:36.197234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.260 [2024-07-15 12:25:36.199913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.260 [2024-07-15 12:25:36.209412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.260 [2024-07-15 12:25:36.209871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.260 [2024-07-15 12:25:36.209905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.260 [2024-07-15 12:25:36.209928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.260 [2024-07-15 12:25:36.210506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.260 [2024-07-15 12:25:36.210681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.261 [2024-07-15 12:25:36.210690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.261 [2024-07-15 12:25:36.210696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.261 [2024-07-15 12:25:36.213340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.261 [2024-07-15 12:25:36.222295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.261 [2024-07-15 12:25:36.222743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.261 [2024-07-15 12:25:36.222785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.261 [2024-07-15 12:25:36.222808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.261 [2024-07-15 12:25:36.223329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.261 [2024-07-15 12:25:36.223495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.261 [2024-07-15 12:25:36.223505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.261 [2024-07-15 12:25:36.223511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.261 [2024-07-15 12:25:36.226137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.261 [2024-07-15 12:25:36.235188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.261 [2024-07-15 12:25:36.235550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.261 [2024-07-15 12:25:36.235568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.261 [2024-07-15 12:25:36.235574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.261 [2024-07-15 12:25:36.235737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.261 [2024-07-15 12:25:36.235899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.261 [2024-07-15 12:25:36.235908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.261 [2024-07-15 12:25:36.235915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.261 [2024-07-15 12:25:36.238682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.521 [2024-07-15 12:25:36.248111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.248492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.248508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.248516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.248681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.248844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.248854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.248859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.251556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.521 [2024-07-15 12:25:36.261032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.261455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.261472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.261480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.261651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.261825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.261835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.261841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.264631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.521 [2024-07-15 12:25:36.274031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.274434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.274452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.274459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.274642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.274806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.274816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.274822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.277636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.521 [2024-07-15 12:25:36.287048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.287484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.287526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.287548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.288126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.288721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.288744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.288753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.291364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.521 [2024-07-15 12:25:36.299987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.300436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.300480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.300501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.301078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.301670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.301706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.301713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.304352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.521 [2024-07-15 12:25:36.312989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.313449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.313492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.313513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.314091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.314686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.314712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.314733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.317422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.521 [2024-07-15 12:25:36.325925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.326319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.326362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.326384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.326890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.327063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.327073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.327079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.329706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.521 [2024-07-15 12:25:36.338756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.339191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.339207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.521 [2024-07-15 12:25:36.339214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.521 [2024-07-15 12:25:36.339406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.521 [2024-07-15 12:25:36.339581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.521 [2024-07-15 12:25:36.339591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.521 [2024-07-15 12:25:36.339597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.521 [2024-07-15 12:25:36.342278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.521 [2024-07-15 12:25:36.351686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.521 [2024-07-15 12:25:36.352146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.521 [2024-07-15 12:25:36.352187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.352209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.352749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.353109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.353127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.353141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.359381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.522 [2024-07-15 12:25:36.366562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.367092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.367113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.367124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.367382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.367638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.367651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.367661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.371719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.522 [2024-07-15 12:25:36.379551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.379987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.380040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.380062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.380654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.380864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.380874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.380881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.383586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.522 [2024-07-15 12:25:36.392444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.392882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.392898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.392906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.393067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.393235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.393244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.393267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.395987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.522 [2024-07-15 12:25:36.405413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.405776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.405793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.405800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.405962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.406126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.406136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.406143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.408833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.522 [2024-07-15 12:25:36.418350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.418790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.418831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.418853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.419443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.419985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.419995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.420001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.422597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.522 [2024-07-15 12:25:36.431195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.431631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.431649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.431656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.431827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.432003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.432012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.432018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.434704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.522 [2024-07-15 12:25:36.443973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.444412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.444429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.444436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.444599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.444763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.444772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.444778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.447465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.522 [2024-07-15 12:25:36.456781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.457223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.457280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.457302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.457880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.458219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.458232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.458238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.460916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.522 [2024-07-15 12:25:36.469563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.470006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.470056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.470077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.522 [2024-07-15 12:25:36.470678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.522 [2024-07-15 12:25:36.471199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.522 [2024-07-15 12:25:36.471208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.522 [2024-07-15 12:25:36.471214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.522 [2024-07-15 12:25:36.473841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.522 [2024-07-15 12:25:36.482483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.522 [2024-07-15 12:25:36.482838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.522 [2024-07-15 12:25:36.482855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.522 [2024-07-15 12:25:36.482862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.523 [2024-07-15 12:25:36.483024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.523 [2024-07-15 12:25:36.483187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.523 [2024-07-15 12:25:36.483196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.523 [2024-07-15 12:25:36.483203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.523 [2024-07-15 12:25:36.485941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.523 [2024-07-15 12:25:36.495304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.523 [2024-07-15 12:25:36.495740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.523 [2024-07-15 12:25:36.495756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.523 [2024-07-15 12:25:36.495763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.523 [2024-07-15 12:25:36.495925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.523 [2024-07-15 12:25:36.496089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.523 [2024-07-15 12:25:36.496099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.523 [2024-07-15 12:25:36.496105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.523 [2024-07-15 12:25:36.498789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.523 [2024-07-15 12:25:36.508192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.523 [2024-07-15 12:25:36.508624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.523 [2024-07-15 12:25:36.508679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.523 [2024-07-15 12:25:36.508701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.523 [2024-07-15 12:25:36.509239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.523 [2024-07-15 12:25:36.509433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.523 [2024-07-15 12:25:36.509443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.523 [2024-07-15 12:25:36.509450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.523 [2024-07-15 12:25:36.512102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.783 [2024-07-15 12:25:36.521274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.521707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.521722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.521729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.521891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.522054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.522063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.522070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.524820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.783 [2024-07-15 12:25:36.534335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.534780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.534798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.534805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.534982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.535162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.535172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.535178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.538021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.783 [2024-07-15 12:25:36.547331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.547788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.547804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.547811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.547973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.548136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.548145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.548152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.550844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.783 [2024-07-15 12:25:36.560258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.560707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.560749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.560772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.561267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.561433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.561442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.561448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.564034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.783 [2024-07-15 12:25:36.573140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.573586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.573628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.573650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.574242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.574573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.574584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.574590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.580744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.783 [2024-07-15 12:25:36.588154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.588689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.783 [2024-07-15 12:25:36.588710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.783 [2024-07-15 12:25:36.588720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.783 [2024-07-15 12:25:36.588972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.783 [2024-07-15 12:25:36.589233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.783 [2024-07-15 12:25:36.589246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.783 [2024-07-15 12:25:36.589256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.783 [2024-07-15 12:25:36.593301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.783 [2024-07-15 12:25:36.601082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.783 [2024-07-15 12:25:36.601541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.601583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.601611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.602190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.602422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.602432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.602438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.605146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.613897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.614305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.614322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.614329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.614492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.614657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.614666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.614672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.617356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.784 [2024-07-15 12:25:36.626825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.627286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.627328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.627349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.627926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.628461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.628471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.628477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.631133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.639724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.640159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.640175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.640182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.640373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.640545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.640561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.640568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.643216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.784 [2024-07-15 12:25:36.652633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.653077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.653119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.653141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.653636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.653801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.653811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.653817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.656443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.665557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.666010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.666027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.666034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.666196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.666389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.666399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.666406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.669116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.784 [2024-07-15 12:25:36.678503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.678967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.679009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.679031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.679540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.679714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.679723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.679730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.682482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.691485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.691923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.691939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.691946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.692108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.692293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.692304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.692310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.694976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.784 [2024-07-15 12:25:36.704395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.704757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.704773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.704780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.704941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.705104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.705114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.705120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.707754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.717271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.717681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.717697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.717704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.717867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.718030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.718039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.718045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.720726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.784 [2024-07-15 12:25:36.730191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.784 [2024-07-15 12:25:36.730628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.784 [2024-07-15 12:25:36.730644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.784 [2024-07-15 12:25:36.730651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.784 [2024-07-15 12:25:36.730816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.784 [2024-07-15 12:25:36.730979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.784 [2024-07-15 12:25:36.730989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.784 [2024-07-15 12:25:36.730995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.784 [2024-07-15 12:25:36.733680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.784 [2024-07-15 12:25:36.743145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.785 [2024-07-15 12:25:36.743567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.785 [2024-07-15 12:25:36.743610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.785 [2024-07-15 12:25:36.743631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.785 [2024-07-15 12:25:36.744017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.785 [2024-07-15 12:25:36.744181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.785 [2024-07-15 12:25:36.744190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.785 [2024-07-15 12:25:36.744196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.785 [2024-07-15 12:25:36.746882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:46.785 [2024-07-15 12:25:36.756053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.785 [2024-07-15 12:25:36.756483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.785 [2024-07-15 12:25:36.756518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.785 [2024-07-15 12:25:36.756541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.785 [2024-07-15 12:25:36.757119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.785 [2024-07-15 12:25:36.757710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.785 [2024-07-15 12:25:36.757736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.785 [2024-07-15 12:25:36.757757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.785 [2024-07-15 12:25:36.760467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:46.785 [2024-07-15 12:25:36.768956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:46.785 [2024-07-15 12:25:36.769366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.785 [2024-07-15 12:25:36.769383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:46.785 [2024-07-15 12:25:36.769390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:46.785 [2024-07-15 12:25:36.769553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:46.785 [2024-07-15 12:25:36.769716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:46.785 [2024-07-15 12:25:36.769725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:46.785 [2024-07-15 12:25:36.769735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:46.785 [2024-07-15 12:25:36.772424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.045 [2024-07-15 12:25:36.781978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.782432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.782476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.782498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.782937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.783111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.783121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.783127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.785961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.045 [2024-07-15 12:25:36.794908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.795328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.795346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.795353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.795529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.795696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.795705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.795711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.798316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.045 [2024-07-15 12:25:36.807805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.808236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.808269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.808278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.808458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.808622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.808631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.808637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.811345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.045 [2024-07-15 12:25:36.820779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.821205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.821232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.821239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.821428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.821601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.821611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.821618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.824270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.045 [2024-07-15 12:25:36.833786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.834149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.834165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.834172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.834341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.834505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.834515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.834521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.837248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.045 [2024-07-15 12:25:36.846682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.847085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.847102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.847108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.847277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.847440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.847449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.847455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.850151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.045 [2024-07-15 12:25:36.859527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.859949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.859965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.859972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.860135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.860327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.860338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.860345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.863063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.045 [2024-07-15 12:25:36.872478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.872855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.872897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.872919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.873365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.873530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.873540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.873546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.876297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.045 [2024-07-15 12:25:36.885331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.885774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.885817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.885839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.886356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.886530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.886540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.886546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.889253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.045 [2024-07-15 12:25:36.898204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.045 [2024-07-15 12:25:36.898651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.045 [2024-07-15 12:25:36.898693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.045 [2024-07-15 12:25:36.898715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.045 [2024-07-15 12:25:36.899185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.045 [2024-07-15 12:25:36.899375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.045 [2024-07-15 12:25:36.899385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.045 [2024-07-15 12:25:36.899392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.045 [2024-07-15 12:25:36.902056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:36.911004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.911440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.911457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.911464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.911626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.911788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.911797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.911803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.914492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.046 [2024-07-15 12:25:36.923826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.924257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.924274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.924281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.924443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.924607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.924616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.924623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.927210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:36.936723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.937056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.937072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.937079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.937264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.937437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.937447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.937453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.940107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.046 [2024-07-15 12:25:36.949558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.950006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.950048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.950077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.950565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.950739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.950748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.950755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.953464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:36.962420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.962783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.962800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.962806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.962968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.963131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.963140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.963145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.965829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.046 [2024-07-15 12:25:36.975246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.975614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.975630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.975636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.975799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.975962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.975971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.975977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.978663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:36.988135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:36.988568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:36.988585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:36.988592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:36.988763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:36.988935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:36.988947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:36.988953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:36.991664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.046 [2024-07-15 12:25:37.000987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:37.001419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:37.001468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:37.001490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:37.002068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:37.002628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:37.002638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:37.002646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:37.005347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:37.013942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:37.014320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:37.014337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:37.014344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:37.014519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:37.014682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:37.014692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:37.014698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:37.017407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.046 [2024-07-15 12:25:37.026836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:37.027239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:37.027256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:37.027264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:37.027435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:37.027612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:37.027622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.046 [2024-07-15 12:25:37.027628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.046 [2024-07-15 12:25:37.030214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.046 [2024-07-15 12:25:37.039873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.046 [2024-07-15 12:25:37.040222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.046 [2024-07-15 12:25:37.040266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.046 [2024-07-15 12:25:37.040289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.046 [2024-07-15 12:25:37.040871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.046 [2024-07-15 12:25:37.041463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.046 [2024-07-15 12:25:37.041490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.047 [2024-07-15 12:25:37.041511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.306 [2024-07-15 12:25:37.044410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.306 [2024-07-15 12:25:37.052917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.306 [2024-07-15 12:25:37.053371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-15 12:25:37.053388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.306 [2024-07-15 12:25:37.053397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.306 [2024-07-15 12:25:37.053574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.306 [2024-07-15 12:25:37.053753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.306 [2024-07-15 12:25:37.053763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.306 [2024-07-15 12:25:37.053770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.306 [2024-07-15 12:25:37.056597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.306 [2024-07-15 12:25:37.066119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.306 [2024-07-15 12:25:37.066571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.306 [2024-07-15 12:25:37.066588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.066595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.066772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.066952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.066961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.066968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.069789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.079257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.079716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.079733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.079743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.079921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.080100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.080109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.080116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.082941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.307 [2024-07-15 12:25:37.092303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.092734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.092751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.092758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.092936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.093115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.093125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.093131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.095955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.105450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.105892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.105909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.105916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.106093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.106277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.106287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.106294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.109118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.307 [2024-07-15 12:25:37.118625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.119073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.119090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.119098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.119286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.119471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.119484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.119491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.122387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.131659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.132108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.132126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.132133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.132316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.132495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.132505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.132511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.135336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.307 [2024-07-15 12:25:37.144837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.145292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.145310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.145318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.145496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.145674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.145684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.145691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.148514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.158022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.158482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.158499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.158507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.158683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.158862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.158871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.158878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.161709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.307 [2024-07-15 12:25:37.171220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.171621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.171639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.171647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.171824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.172002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.172012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.172019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.174846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.184371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.184825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.184842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.184850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.185026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.185206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.185215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.185222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.188050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.307 [2024-07-15 12:25:37.197572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.197941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.307 [2024-07-15 12:25:37.197958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.307 [2024-07-15 12:25:37.197965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.307 [2024-07-15 12:25:37.198142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.307 [2024-07-15 12:25:37.198326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.307 [2024-07-15 12:25:37.198336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.307 [2024-07-15 12:25:37.198344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.307 [2024-07-15 12:25:37.201172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.307 [2024-07-15 12:25:37.210699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.307 [2024-07-15 12:25:37.211042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.211059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.211066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.211250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.211430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.211440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.211446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.214267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.308 [2024-07-15 12:25:37.223772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.224144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.224161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.224168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.224351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.224531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.224541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.224549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.227380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.308 [2024-07-15 12:25:37.236900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.237278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.237296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.237303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.237480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.237659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.237670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.237676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.240508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.308 [2024-07-15 12:25:37.250015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.250463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.250481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.250488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.250665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.250843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.250853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.250864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.253696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.308 [2024-07-15 12:25:37.263202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.263650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.263667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.263675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.263852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.264030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.264040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.264046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.266873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.308 [2024-07-15 12:25:37.276394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.276768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.276786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.276793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.276970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.277149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.277159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.277165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.279992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.308 [2024-07-15 12:25:37.289518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.289904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.289922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.289929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.290106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.290289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.290299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.290306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.308 [2024-07-15 12:25:37.293130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.308 [2024-07-15 12:25:37.302642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.308 [2024-07-15 12:25:37.302996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.308 [2024-07-15 12:25:37.303019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.308 [2024-07-15 12:25:37.303026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.308 [2024-07-15 12:25:37.303203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.308 [2024-07-15 12:25:37.303387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.308 [2024-07-15 12:25:37.303397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.308 [2024-07-15 12:25:37.303404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.306228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.569 [2024-07-15 12:25:37.315731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.316110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.316128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.316135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.316317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.316500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.316510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.316517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.319348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.569 [2024-07-15 12:25:37.328853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.329286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.329303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.329311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.329487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.329666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.329676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.329682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.332508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.569 [2024-07-15 12:25:37.342015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.342449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.342466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.342473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.342650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.342833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.342843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.342849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.345682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.569 [2024-07-15 12:25:37.355210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.355564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.355581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.355589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.355765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.355943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.355953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.355959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.358785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.569 [2024-07-15 12:25:37.368288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.368676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.368693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.368700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.368877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.369055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.369065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.369072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.371899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.569 [2024-07-15 12:25:37.381433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.381884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.381901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.381908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.382085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.382289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.382300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.382307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.385179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.569 [2024-07-15 12:25:37.394626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.395049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.395066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.395073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.395254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.395432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.395442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.395449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.398276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.569 [2024-07-15 12:25:37.407802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.408253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.408271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.408278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.408455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.408634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.408644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.408652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.411484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.569 [2024-07-15 12:25:37.420839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.569 [2024-07-15 12:25:37.421207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.569 [2024-07-15 12:25:37.421230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.569 [2024-07-15 12:25:37.421237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.569 [2024-07-15 12:25:37.421414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.569 [2024-07-15 12:25:37.421592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.569 [2024-07-15 12:25:37.421602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.569 [2024-07-15 12:25:37.421608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.569 [2024-07-15 12:25:37.424437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.433943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.434315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.434333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.434344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.434540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.434724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.434734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.434741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.437620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.570 [2024-07-15 12:25:37.447019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.447366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.447383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.447390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.447766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.447946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.447957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.447964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.450805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.460158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.460634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.460678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.460700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.461254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.461434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.461444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.461451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.464272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.570 [2024-07-15 12:25:37.473244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.473692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.473709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.473716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.474301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.474481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.474494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.474503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.477285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.486169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.486602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.486619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.486626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.486797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.486970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.486980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.486986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.489668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.570 [2024-07-15 12:25:37.499075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.499336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.499353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.499361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.499524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.499686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.499695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.499702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.502389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.511959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.512292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.512309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.512316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.512479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.512642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.512651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.512657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.515342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.570 [2024-07-15 12:25:37.524847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.525293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.525334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.525355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.525749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.525914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.525923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.525929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.528614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.537855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.538238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.538283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.538304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.538732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.538896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.538905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.538911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.541594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.570 [2024-07-15 12:25:37.550843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.551276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.551292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.551299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.551461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.551625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.551633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.551640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.570 [2024-07-15 12:25:37.554269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.570 [2024-07-15 12:25:37.563800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.570 [2024-07-15 12:25:37.564223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.570 [2024-07-15 12:25:37.564246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.570 [2024-07-15 12:25:37.564254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.570 [2024-07-15 12:25:37.564429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.570 [2024-07-15 12:25:37.564603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.570 [2024-07-15 12:25:37.564612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.570 [2024-07-15 12:25:37.564619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.567369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.831 [2024-07-15 12:25:37.576722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.577177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.577220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.577258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.577719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.577892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.577902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.577910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.580541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.831 [2024-07-15 12:25:37.589548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.589915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.589957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.589979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.590572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.591034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.591043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.591049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.593632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.831 [2024-07-15 12:25:37.602335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.602779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.602822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.602844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.603215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.603406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.603416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.603426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.606082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.831 [2024-07-15 12:25:37.615188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.615558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.615574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.615580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.615743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.615906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.615915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.615921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.618604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.831 [2024-07-15 12:25:37.628112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.628479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.628496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.628502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.628665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.628827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.628837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.628842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.631522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.831 [2024-07-15 12:25:37.640991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.641339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.641356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.641363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.641559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.641723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.641733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.641739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.644423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.831 [2024-07-15 12:25:37.653886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.654318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.654333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.654340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.654503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.654666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.654676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.654682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.657366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.831 [2024-07-15 12:25:37.666867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.667287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.667303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.667312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.667474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.667637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.667646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.667652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.670340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.831 [2024-07-15 12:25:37.679926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.680355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.680394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.680417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.680994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.681312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.681322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.681331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.831 [2024-07-15 12:25:37.684117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.831 [2024-07-15 12:25:37.693130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.831 [2024-07-15 12:25:37.693520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.831 [2024-07-15 12:25:37.693537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.831 [2024-07-15 12:25:37.693545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.831 [2024-07-15 12:25:37.693725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.831 [2024-07-15 12:25:37.693903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.831 [2024-07-15 12:25:37.693913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.831 [2024-07-15 12:25:37.693919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.696749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.832 [2024-07-15 12:25:37.706258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.706721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.706762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.706784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.707254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.707433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.707444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.707450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.710272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.832 [2024-07-15 12:25:37.719307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.719692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.719709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.719715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.719886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.720058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.720068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.720074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.722884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.832 [2024-07-15 12:25:37.732103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.732568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.732610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.732632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.733175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.733353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.733363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.733374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.736101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.832 [2024-07-15 12:25:37.744937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.745370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.745386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.745393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.745555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.745719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.745729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.745735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.748422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.832 [2024-07-15 12:25:37.757836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.758234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.758251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.758258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.758421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.758585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.758595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.758601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.761188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.832 [2024-07-15 12:25:37.770711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.771166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.771209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.771244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.771824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.772429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.772442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.772452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.775089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.832 [2024-07-15 12:25:37.783529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.783988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.784037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.784059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.784652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.785243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.785270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.785291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.788021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.832 [2024-07-15 12:25:37.796343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.796824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.796840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.796847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.797008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.797171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.797180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.797186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.800016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:47.832 [2024-07-15 12:25:37.809270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.809642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.809658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.809666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.809837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.810009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.810019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.810025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.812704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:47.832 [2024-07-15 12:25:37.822112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:47.832 [2024-07-15 12:25:37.822492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.832 [2024-07-15 12:25:37.822545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:47.832 [2024-07-15 12:25:37.822570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:47.832 [2024-07-15 12:25:37.823151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:47.832 [2024-07-15 12:25:37.823423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:47.832 [2024-07-15 12:25:37.823433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:47.832 [2024-07-15 12:25:37.823439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:47.832 [2024-07-15 12:25:37.826177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.093 [2024-07-15 12:25:37.835128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.835543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.835586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.835609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.836187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.836573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.836583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.836589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.839236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.093 [2024-07-15 12:25:37.847934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.848373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.848390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.848398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.848570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.848743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.848752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.848758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.851538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.093 [2024-07-15 12:25:37.860715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.861057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.861072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.861079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.861249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.861436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.861447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.861453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.864116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.093 [2024-07-15 12:25:37.873627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.874067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.874084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.874091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.874274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.874448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.874458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.874465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.877124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.093 [2024-07-15 12:25:37.886526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.886936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.886952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.886959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.887141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.887320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.887330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.887336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.889996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.093 [2024-07-15 12:25:37.899411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.899843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.899860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.899866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.900028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.900193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.900203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.900208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.902894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.093 [2024-07-15 12:25:37.912206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.912653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.912697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.912726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.093 [2024-07-15 12:25:37.913297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.093 [2024-07-15 12:25:37.913471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.093 [2024-07-15 12:25:37.913481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.093 [2024-07-15 12:25:37.913487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.093 [2024-07-15 12:25:37.916138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.093 [2024-07-15 12:25:37.925090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.093 [2024-07-15 12:25:37.925527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.093 [2024-07-15 12:25:37.925543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.093 [2024-07-15 12:25:37.925551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.925713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.925876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.925885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.925891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.928572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:37.937943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:37.938394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:37.938436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:37.938458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.939036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.939634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.939661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.939691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.942378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.094 [2024-07-15 12:25:37.950765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:37.951208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:37.951271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:37.951293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.951723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.951887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.951902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.951908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.954601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:37.963553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:37.963985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:37.964027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:37.964048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.964538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.964712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.964722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.964728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.967367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.094 [2024-07-15 12:25:37.976475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:37.976807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:37.976824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:37.976830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.976992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.977155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.977164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.977170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.979853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:37.989472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:37.989929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:37.989971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:37.989992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:37.990583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:37.990789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:37.990799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:37.990805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:37.993441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.094 [2024-07-15 12:25:38.002389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.002705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.002748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:38.002769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:38.003363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:38.003866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:38.003876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:38.003882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:38.006468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:38.015268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.015701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.015717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:38.015723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:38.015885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:38.016047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:38.016056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:38.016063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:38.018748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.094 [2024-07-15 12:25:38.028159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.028491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.028507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:38.028514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:38.028675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:38.028838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:38.028848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:38.028854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:38.031547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:38.041054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.041416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.041433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:38.041439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:38.041604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:38.041767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:38.041775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:38.041782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:38.044479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.094 [2024-07-15 12:25:38.053947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.054298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.054315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.094 [2024-07-15 12:25:38.054322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.094 [2024-07-15 12:25:38.054484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.094 [2024-07-15 12:25:38.054646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.094 [2024-07-15 12:25:38.054656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.094 [2024-07-15 12:25:38.054662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.094 [2024-07-15 12:25:38.057447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.094 [2024-07-15 12:25:38.066861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.094 [2024-07-15 12:25:38.067293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.094 [2024-07-15 12:25:38.067309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.095 [2024-07-15 12:25:38.067318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.095 [2024-07-15 12:25:38.067489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.095 [2024-07-15 12:25:38.067663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.095 [2024-07-15 12:25:38.067673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.095 [2024-07-15 12:25:38.067679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.095 [2024-07-15 12:25:38.070322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.095 [2024-07-15 12:25:38.079819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.095 [2024-07-15 12:25:38.080180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.095 [2024-07-15 12:25:38.080196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.095 [2024-07-15 12:25:38.080204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.095 [2024-07-15 12:25:38.080411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.095 [2024-07-15 12:25:38.080591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.095 [2024-07-15 12:25:38.080601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.095 [2024-07-15 12:25:38.080611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.095 [2024-07-15 12:25:38.083300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.355 [2024-07-15 12:25:38.092818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.093184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.093240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.093263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.093841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.094037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.094046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.094052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.096782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.355 [2024-07-15 12:25:38.105742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.106107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.106149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.106170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.106765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.107000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.107009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.107016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.109641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.355 [2024-07-15 12:25:38.118590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.119042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.119084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.119106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.119695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.120189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.120199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.120205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.122824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.355 [2024-07-15 12:25:38.131379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.131840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.131880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.131901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.132494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.133076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.133101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.133123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.139357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.355 [2024-07-15 12:25:38.146351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.146882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.146903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.146913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.147166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.147427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.147441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.147450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.151506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.355 [2024-07-15 12:25:38.159335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.159770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.159812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.159834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.160289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.160464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.160474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.160481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.163223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.355 [2024-07-15 12:25:38.172127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.172430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.172446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.172453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.172615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.172781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.172790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.172796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.175483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.355 [2024-07-15 12:25:38.184959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.185400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.185442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.185464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.186046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.186211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.186220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.355 [2024-07-15 12:25:38.186234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.355 [2024-07-15 12:25:38.189027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.355 [2024-07-15 12:25:38.197936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.355 [2024-07-15 12:25:38.198363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.355 [2024-07-15 12:25:38.198404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.355 [2024-07-15 12:25:38.198427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.355 [2024-07-15 12:25:38.199004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.355 [2024-07-15 12:25:38.199415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.355 [2024-07-15 12:25:38.199424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.199430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.202106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.210849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.211288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.211305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.211312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.211474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.211637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.211646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.211652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.214342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.356 [2024-07-15 12:25:38.223754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.224171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.224212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.224248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.224827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.225404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.225413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.225419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.228017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.236662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.237113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.237155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.237176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.237769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.238362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.238389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.238409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.241037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.356 [2024-07-15 12:25:38.249587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.249969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.250011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.250032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.250509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.250674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.250683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.250690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.253289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.262763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.263237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.263287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.263310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.263885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.264467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.264477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.264483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.267140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.356 [2024-07-15 12:25:38.275639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.276087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.276130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.276150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.276580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.276754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.276764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.276771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.279413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.288520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.288882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.288899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.288905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.289067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.289238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.289247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.289269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.291877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.356 [2024-07-15 12:25:38.301349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.301784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.301801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.301808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.301970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.302137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.302147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.302153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.304860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.314336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.314783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.314815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.314838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.315424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.315814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.315831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.315845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.322077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.356 [2024-07-15 12:25:38.329321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.329781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.356 [2024-07-15 12:25:38.329802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.356 [2024-07-15 12:25:38.329812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.356 [2024-07-15 12:25:38.330065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.356 [2024-07-15 12:25:38.330328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.356 [2024-07-15 12:25:38.330342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.356 [2024-07-15 12:25:38.330351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.356 [2024-07-15 12:25:38.334401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.356 [2024-07-15 12:25:38.342404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.356 [2024-07-15 12:25:38.342858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.357 [2024-07-15 12:25:38.342901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.357 [2024-07-15 12:25:38.342921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.357 [2024-07-15 12:25:38.343459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.357 [2024-07-15 12:25:38.343633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.357 [2024-07-15 12:25:38.343643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.357 [2024-07-15 12:25:38.343650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.357 [2024-07-15 12:25:38.346369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.617 [2024-07-15 12:25:38.355423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.617 [2024-07-15 12:25:38.355853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.617 [2024-07-15 12:25:38.355901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.617 [2024-07-15 12:25:38.355923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.617 [2024-07-15 12:25:38.356445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.617 [2024-07-15 12:25:38.356619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.617 [2024-07-15 12:25:38.356627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.617 [2024-07-15 12:25:38.356633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.359381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.368242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.368609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.368650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.368672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.369261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.369810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.369820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.369826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.372464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.618 [2024-07-15 12:25:38.381039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.381445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.381488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.381511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.381959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.382123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.382133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.382139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.384826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.393996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.394440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.394484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.394513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.394910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.395076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.395086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.395092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.397774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.618 [2024-07-15 12:25:38.406893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.407329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.407373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.407396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.407870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.408034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.408043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.408050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.410738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.419705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.420144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.420186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.420208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.420666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.420840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.420848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.420855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.423492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.618 [2024-07-15 12:25:38.432598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.433048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.433090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.433110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.433657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.433830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.433843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.433849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.436480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.445545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.446002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.446019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.446026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.446199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.446400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.446410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.446417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.449411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.618 [2024-07-15 12:25:38.458590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.459042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.459060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.459068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.459252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.459430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.459440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.459448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.462276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.471787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.472165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.472183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.472190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.472371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.472549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.472559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.472566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.475394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.618 [2024-07-15 12:25:38.484901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.485330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.485347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.485355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.485531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.485709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.485719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.618 [2024-07-15 12:25:38.485726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.618 [2024-07-15 12:25:38.488556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.618 [2024-07-15 12:25:38.498070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.618 [2024-07-15 12:25:38.498505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.618 [2024-07-15 12:25:38.498523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.618 [2024-07-15 12:25:38.498531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.618 [2024-07-15 12:25:38.498708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.618 [2024-07-15 12:25:38.498887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.618 [2024-07-15 12:25:38.498896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.498903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.501739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1360785 Killed "${NVMF_APP[@]}" "$@" 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.619 [2024-07-15 12:25:38.511282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.511661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.511678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.511687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.511864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.512043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.512053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.512062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.514898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1361984 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1361984 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1361984 ']' 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:48.619 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.619 [2024-07-15 12:25:38.524430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.524851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.524869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.524876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.525054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.525239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.525248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.525255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.528086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.619 [2024-07-15 12:25:38.537629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.538058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.538075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.538083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.538267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.538447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.538458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.538465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.541310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.619 [2024-07-15 12:25:38.550831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.551267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.551286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.551294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.551474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.551659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.551669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.551676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.554491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.619 [2024-07-15 12:25:38.561532] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:35:48.619 [2024-07-15 12:25:38.561573] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.619 [2024-07-15 12:25:38.564041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.564359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.564377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.564386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.564563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.564743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.564753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.564761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.567592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.619 [2024-07-15 12:25:38.577054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.577386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.577404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.577411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.577595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.577769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.577779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.577785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.580676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.619 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.619 [2024-07-15 12:25:38.590188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.590581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.590598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.590605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.590778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.590957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.590967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.590974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.593830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.619 [2024-07-15 12:25:38.603341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.619 [2024-07-15 12:25:38.603700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.619 [2024-07-15 12:25:38.603718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.619 [2024-07-15 12:25:38.603726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.619 [2024-07-15 12:25:38.603903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.619 [2024-07-15 12:25:38.604083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.619 [2024-07-15 12:25:38.604093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.619 [2024-07-15 12:25:38.604100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.619 [2024-07-15 12:25:38.606934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.880 [2024-07-15 12:25:38.616456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.616831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.616849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.616857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.617034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.617213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.617223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.617237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.620080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.880 [2024-07-15 12:25:38.629524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.629972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.629990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.629997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.630169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.630348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.630358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.630364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.633180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.880 [2024-07-15 12:25:38.634949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:48.880 [2024-07-15 12:25:38.642589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.643021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.643040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.643048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.643222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.643401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.643411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.643418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.646200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.880 [2024-07-15 12:25:38.655745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.656183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.656203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.656211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.656388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.656561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.656571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.656578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.659403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.880 [2024-07-15 12:25:38.668779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.669238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.669258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.669266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.669439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.669613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.669623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.669630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.672468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.880 [2024-07-15 12:25:38.676122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.880 [2024-07-15 12:25:38.676151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:48.880 [2024-07-15 12:25:38.676163] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.880 [2024-07-15 12:25:38.676169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.880 [2024-07-15 12:25:38.676174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.880 [2024-07-15 12:25:38.676219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:48.880 [2024-07-15 12:25:38.676330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:48.880 [2024-07-15 12:25:38.676331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:48.880 [2024-07-15 12:25:38.681950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.682355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.682376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.682386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.682571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.682747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.682756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.682764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:48.880 [2024-07-15 12:25:38.685598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.880 [2024-07-15 12:25:38.695114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.695505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.695527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.695537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.695718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.695898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.695908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.695916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.698744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.880 [2024-07-15 12:25:38.708270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.880 [2024-07-15 12:25:38.708725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.880 [2024-07-15 12:25:38.708748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.880 [2024-07-15 12:25:38.708756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.880 [2024-07-15 12:25:38.708935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.880 [2024-07-15 12:25:38.709116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.880 [2024-07-15 12:25:38.709127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.880 [2024-07-15 12:25:38.709141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.880 [2024-07-15 12:25:38.711972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.880 [2024-07-15 12:25:38.721316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.721643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.721665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.721673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.721853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.722033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.722043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.722051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.724885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.734406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.734761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.734782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.734790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.734970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.735151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.735160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.735169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.737997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.881 [2024-07-15 12:25:38.747508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.747847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.747866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.747874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.748052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.748235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.748246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.748253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.751077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.760593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.760973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.760995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.761002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.761180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.761363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.761374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.761380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:48.881 [2024-07-15 12:25:38.764205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.881 [2024-07-15 12:25:38.773715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.774150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.774167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.774175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.774358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.774539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.774549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.774558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.777382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.787085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.787406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.787424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.787431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.787608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.787788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.787798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.787804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.790634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.881 [2024-07-15 12:25:38.800148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.881 [2024-07-15 12:25:38.800466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.800486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.800494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.800671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.800852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.800862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.800869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.803698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.806067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.881 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.881 [2024-07-15 12:25:38.813222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.813604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.813622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.813629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.813806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.813985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.813995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.814001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:48.881 [2024-07-15 12:25:38.816829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.826364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.826796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.826814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.826822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.827000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.881 [2024-07-15 12:25:38.827179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.881 [2024-07-15 12:25:38.827189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.881 [2024-07-15 12:25:38.827200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.881 [2024-07-15 12:25:38.830034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.881 [2024-07-15 12:25:38.839554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.881 [2024-07-15 12:25:38.839948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.881 [2024-07-15 12:25:38.839967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.881 [2024-07-15 12:25:38.839974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.881 [2024-07-15 12:25:38.840152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.882 [2024-07-15 12:25:38.840337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.882 [2024-07-15 12:25:38.840348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.882 [2024-07-15 12:25:38.840355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.882 [2024-07-15 12:25:38.843180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.882 Malloc0 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.882 [2024-07-15 12:25:38.852704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.882 [2024-07-15 12:25:38.853079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.882 [2024-07-15 12:25:38.853096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.882 [2024-07-15 12:25:38.853104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.882 [2024-07-15 12:25:38.853285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.882 [2024-07-15 12:25:38.853463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.882 [2024-07-15 12:25:38.853473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.882 [2024-07-15 12:25:38.853479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.882 [2024-07-15 12:25:38.856305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.882 [2024-07-15 12:25:38.865834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:48.882 [2024-07-15 12:25:38.866216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.882 [2024-07-15 12:25:38.866238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24742d0 with addr=10.0.0.2, port=4420 00:35:48.882 [2024-07-15 12:25:38.866250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24742d0 is same with the state(5) to be set 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.882 [2024-07-15 12:25:38.866428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24742d0 (9): Bad file descriptor 00:35:48.882 [2024-07-15 12:25:38.866609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:48.882 [2024-07-15 12:25:38.866619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:48.882 [2024-07-15 12:25:38.866626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:48.882 [2024-07-15 12:25:38.868769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.882 [2024-07-15 12:25:38.869451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:48.882 12:25:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1361044 00:35:49.140 [2024-07-15 12:25:38.878958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:49.140 [2024-07-15 12:25:39.042233] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
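[editor's note] For readers reconstructing the setup from the interleaved trace above, the rpc_cmd calls amount to the following bring-up sequence. This is a minimal sketch using SPDK's scripts/rpc.py against the /var/tmp/spdk.sock RPC socket; only the flags visible in the trace are shown, and anything not shown here is left at its default.

  # TCP transport with the 8192-byte IO unit size used by the test
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB RAM-backed bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # Subsystem allowing any host (-a), serial number as in the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Expose Malloc0 as a namespace and listen on the target-side address
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the waiting bdevperf/reconnect process (pid 1361044 above) can finally complete its controller resets, which is why the loop of "Resetting controller failed" messages ends with "Resetting controller successful."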
00:35:59.117 00:35:59.117 Latency(us) 00:35:59.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:59.118 Verification LBA range: start 0x0 length 0x4000 00:35:59.118 Nvme1n1 : 15.01 8354.67 32.64 11078.83 0.00 6566.43 651.80 14189.97 00:35:59.118 =================================================================================================================== 00:35:59.118 Total : 8354.67 32.64 11078.83 0.00 6566.43 651.80 14189.97 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:59.118 rmmod nvme_tcp 00:35:59.118 rmmod nvme_fabrics 00:35:59.118 rmmod nvme_keyring 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1361984 ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1361984 ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1361984' 00:35:59.118 killing process with pid 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1361984 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:59.118 12:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.022 12:25:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:01.022 00:36:01.022 real 0m25.739s 00:36:01.022 user 1m0.647s 00:36:01.022 sys 0m6.477s 00:36:01.022 12:25:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:01.022 12:25:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:01.022 ************************************ 00:36:01.022 END TEST nvmf_bdevperf 00:36:01.022 ************************************ 00:36:01.022 12:25:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:01.022 12:25:50 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:01.022 12:25:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:01.022 12:25:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:01.022 12:25:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:01.022 ************************************ 00:36:01.022 START TEST nvmf_target_disconnect 00:36:01.022 ************************************ 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:01.022 * Looking for test storage... 
00:36:01.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.022 12:25:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:01.023 12:25:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:06.365 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:06.365 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.365 12:25:56 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:06.365 Found net devices under 0000:86:00.0: cvl_0_0 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:06.365 Found net devices under 0000:86:00.1: cvl_0_1 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:06.365 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:06.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:06.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:36:06.625 00:36:06.625 --- 10.0.0.2 ping statistics --- 00:36:06.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.625 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:06.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:06.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:36:06.625 00:36:06.625 --- 10.0.0.1 ping statistics --- 00:36:06.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:06.625 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:06.625 ************************************ 00:36:06.625 START TEST nvmf_target_disconnect_tc1 00:36:06.625 ************************************ 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:36:06.625 
12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:06.625 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.625 [2024-07-15 12:25:56.595230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.625 [2024-07-15 12:25:56.595267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17afab0 with addr=10.0.0.2, port=4420 00:36:06.625 [2024-07-15 12:25:56.595286] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:06.625 [2024-07-15 12:25:56.595294] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:06.625 [2024-07-15 12:25:56.595300] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:06.625 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:06.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:06.625 Initializing NVMe Controllers 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:06.625 00:36:06.625 real 0m0.110s 00:36:06.625 user 0m0.044s 00:36:06.625 sys 
0m0.066s 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:06.625 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:06.625 ************************************ 00:36:06.625 END TEST nvmf_target_disconnect_tc1 00:36:06.625 ************************************ 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:06.885 ************************************ 00:36:06.885 START TEST nvmf_target_disconnect_tc2 00:36:06.885 ************************************ 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1367131 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1367131 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1367131 ']' 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:06.885 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.885 [2024-07-15 12:25:56.722830] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:36:06.885 [2024-07-15 12:25:56.722868] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.885 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.885 [2024-07-15 12:25:56.792523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.885 [2024-07-15 12:25:56.833255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.885 [2024-07-15 12:25:56.833294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.885 [2024-07-15 12:25:56.833301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.885 [2024-07-15 12:25:56.833307] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.885 [2024-07-15 12:25:56.833312] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.885 [2024-07-15 12:25:56.833428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:36:06.885 [2024-07-15 12:25:56.833557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:36:06.885 [2024-07-15 12:25:56.833662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:36:06.885 [2024-07-15 12:25:56.833664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 Malloc0 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:07.145 12:25:56 
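[editor's note] The disconnect_init step above simply runs the target inside the namespace that nvmf_tcp_init wired up earlier in the trace. A condensed sketch of that environment, with commands copied from the nvmf/common.sh trace (run as root; ordering and error handling are simplified):

  # Target-side interface moves into its own namespace; initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Launch the target inside the namespace (path relative to the spdk checkout).
  # -e 0xFFFF enables all tracepoint groups; the reactor notices above confirm
  # the -m 0xF0 core mask, i.e. four reactors on cores 4-7.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &

The waitforlisten helper then blocks on /var/tmp/spdk.sock before the test issues its rpc_cmd configuration calls below.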
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 [2024-07-15 12:25:56.999212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 [2024-07-15 12:25:57.031443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1367161 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:07.145 12:25:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:07.145 EAL: No free 2048 kB 
hugepages reported on node 1
00:36:09.714 12:25:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1367131
00:36:09.714 12:25:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:36:09.714 [bursts of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries, each marked "starting I/O failed", reported for the I/O outstanding on every queue pair after the target process was killed; each burst ends with a CQ transport error:]
00:36:09.714 [2024-07-15 12:25:59.058735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.715 [2024-07-15 12:25:59.058941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:36:09.715 [2024-07-15 12:25:59.059131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:36:09.715 [2024-07-15 12:25:59.059332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.715 [2024-07-15 12:25:59.059448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.715 [2024-07-15 12:25:59.059465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:09.715 qpair failed and we were unable to recover it.
00:36:09.715 [the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for each further reconnect attempt on tqpair=0x7fa234000b90, timestamps 12:25:59.059662 through 12:25:59.063322]
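For reference, the two error numbers in the log above can be decoded with a minimal sketch like the one below; this is illustrative only and not part of the test scripts, and it assumes Linux errno numbering (ENXIO == 6, ECONNREFUSED == 111):

/* Minimal sketch (not from the test suite): decode the two error numbers
 * seen above, assuming Linux errno values. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        /* "CQ transport error -6" corresponds to -ENXIO. */
        printf("errno %d: %s\n", ENXIO, strerror(ENXIO));
        /* "connect() failed, errno = 111" corresponds to ECONNREFUSED. */
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
}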
00:36:09.716 [2024-07-15 12:25:59.063462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.716 [2024-07-15 12:25:59.063512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.716 qpair failed and we were unable to recover it.
00:36:09.716 [the same triplet, connect() failed (errno = 111) / sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.", repeats for every subsequent reconnect attempt, timestamps 12:25:59.063744 through 12:25:59.096922]
00:36:09.720 [2024-07-15 12:25:59.097124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.720 [2024-07-15 12:25:59.097154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.720 qpair failed and we were unable to recover it.
00:36:09.720 [2024-07-15 12:25:59.097308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.097340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.097532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.097562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.097704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.097735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.097868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.097899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.098017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.098048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.098180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.098210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.098423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.098455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.098642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.098672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.098864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.098895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.099034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 
00:36:09.720 [2024-07-15 12:25:59.099257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.099432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.099576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.099737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.099888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.099919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.100038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.100205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.100398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.100577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.100737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 
00:36:09.720 [2024-07-15 12:25:59.100888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.100919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.101195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.101237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.101362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.101398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.101585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.101616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.101772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.101803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.102062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.102092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.102274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.102306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.102497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.102528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.102667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.102698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.720 [2024-07-15 12:25:59.102827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.102858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 
00:36:09.720 [2024-07-15 12:25:59.103141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.720 [2024-07-15 12:25:59.103171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.720 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.103330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.103361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.103487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.103517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.103649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.103680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.103812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.103843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.104057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.104088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.104215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.104257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.104451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.104481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.104759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.104790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.104935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.104966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 
00:36:09.721 [2024-07-15 12:25:59.105166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.105196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.105456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.105488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.105629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.105659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.105857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.105888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.106034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.106065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.106184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.106214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.106361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.106392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.106592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.106623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.106811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.106841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.107053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.107089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 
00:36:09.721 [2024-07-15 12:25:59.107241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.107275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.107466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.107497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.107662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.107692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.107812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.107843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.108101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.108132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.108364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.108409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.108626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.108657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.108853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.108883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.109075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.109107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.109312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.109343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 
00:36:09.721 [2024-07-15 12:25:59.109536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.109567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.109779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.109810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.110018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.110049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.110189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.110222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.110367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.110398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.110670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.110701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.110981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.111011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.111145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.111176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.111384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.111416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.111695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.111724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 
00:36:09.721 [2024-07-15 12:25:59.111851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.111881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.721 qpair failed and we were unable to recover it. 00:36:09.721 [2024-07-15 12:25:59.112136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.721 [2024-07-15 12:25:59.112167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.112394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.112425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.112631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.112662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.112788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.112819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.113008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.113039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.113295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.113328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.113538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.113568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.113757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.113787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.113931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.113961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 
00:36:09.722 [2024-07-15 12:25:59.114083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.114113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.114300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.114331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.114520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.114550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.114748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.114779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.114985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.115016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.115293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.115324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.115580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.115611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.115867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.115897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.116165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.116196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.116350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.116381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 
00:36:09.722 [2024-07-15 12:25:59.116520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.116552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.116688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.116719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.116852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.116883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.117078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.117109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.117244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.117275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.117435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.117465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.117688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.117718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.117857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.117887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.118097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.118129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.118253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.118284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 
00:36:09.722 [2024-07-15 12:25:59.118434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.118465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.118656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.118687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.118947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.118977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.119257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.119288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.119508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.119538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.119683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.119714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.119918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.119949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.120141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.120172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.120363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.120395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.120648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.120678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 
00:36:09.722 [2024-07-15 12:25:59.120868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.120899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.722 [2024-07-15 12:25:59.121032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.722 [2024-07-15 12:25:59.121063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.722 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.121216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.121275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.121463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.121494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.121682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.121712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.121909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.121939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.122131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.122161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.122347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.122389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.122530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.122562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.122683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.122713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 
00:36:09.723 [2024-07-15 12:25:59.122906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.122937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.123127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.123158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.123305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.123337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.123486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.123517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.123749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.123779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.124061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.124092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.124323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.124354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.124633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.124663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.124947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.124977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 00:36:09.723 [2024-07-15 12:25:59.125193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.723 [2024-07-15 12:25:59.125231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.723 qpair failed and we were unable to recover it. 
00:36:09.723 [2024-07-15 12:25:59.125425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.723 [2024-07-15 12:25:59.125455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.723 qpair failed and we were unable to recover it.
00:36:09.723 [2024-07-15 12:25:59.125657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.723 [2024-07-15 12:25:59.125688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.723 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-15 12:25:59.125 through 12:25:59.159 ...]
00:36:09.727 [2024-07-15 12:25:59.159625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.727 [2024-07-15 12:25:59.159706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420
00:36:09.727 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 from 12:25:59.159 through 12:25:59.164 ...]
00:36:09.727 [2024-07-15 12:25:59.164238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.727 [2024-07-15 12:25:59.164274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.727 qpair failed and we were unable to recover it.
[... the same failure resumes for tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 and repeats through 2024-07-15 12:25:59.174 ...]
00:36:09.728 [2024-07-15 12:25:59.174722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.728 [2024-07-15 12:25:59.174752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.728 qpair failed and we were unable to recover it. 00:36:09.728 [2024-07-15 12:25:59.174901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.728 [2024-07-15 12:25:59.174933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.728 qpair failed and we were unable to recover it. 00:36:09.728 [2024-07-15 12:25:59.175082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.728 [2024-07-15 12:25:59.175112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.728 qpair failed and we were unable to recover it. 00:36:09.728 [2024-07-15 12:25:59.175313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.728 [2024-07-15 12:25:59.175346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.175600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.175630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.175881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.175913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.176156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.176187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.176475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.176507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.176729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.176761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.176999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.177030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 
00:36:09.729 [2024-07-15 12:25:59.177185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.177216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.177504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.177535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.177672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.177702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.177904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.177935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.178071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.178102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.178314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.178346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.178497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.178527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.178729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.178760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.178954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.178984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.179184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.179215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 
00:36:09.729 [2024-07-15 12:25:59.179478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.179509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.179701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.179732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.179867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.179898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.180203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.180243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.180434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.180465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.180659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.180690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.180886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.180916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.181171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.181202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.181378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.181410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.181702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.181733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 
00:36:09.729 [2024-07-15 12:25:59.181945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.181976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.182121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.182152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.182361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.182393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.182535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.182566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.182837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.182867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.183021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.183053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.183252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.183283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.183480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.183511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.183700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.183731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.183985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.184015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 
00:36:09.729 [2024-07-15 12:25:59.184294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.184326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.184541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.184572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.184762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.184793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.184928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.184959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.729 [2024-07-15 12:25:59.185165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.729 [2024-07-15 12:25:59.185196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.729 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.185451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.185510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.185724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.185757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.185901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.185933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.186057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.186095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.186384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.186421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 
00:36:09.730 [2024-07-15 12:25:59.186710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.186742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.187002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.187052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.187357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.187403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.187578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.187621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.187785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.187820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.188102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.188134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.188413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.188450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.188658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.188689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.188905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.188948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.189184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.189238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 
00:36:09.730 [2024-07-15 12:25:59.189458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.189492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.189640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.189670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.189951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.189981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.190183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.190214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.190442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.190473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.190667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.190697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.190944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.190974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.191117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.191148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.191364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.191395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.191605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.191635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 
00:36:09.730 [2024-07-15 12:25:59.191889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.191919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.192121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.192152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.192334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.192382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.192525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.192555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.192741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.192772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.193054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.193085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.193296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.193328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.193546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.193576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.193730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.193760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.193960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.193991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 
00:36:09.730 [2024-07-15 12:25:59.194189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.194220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.194375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.194406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.194598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.194628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.194768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.194798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.195078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.195110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.730 qpair failed and we were unable to recover it. 00:36:09.730 [2024-07-15 12:25:59.195365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.730 [2024-07-15 12:25:59.195396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.195590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.195620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.195753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.195784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.196049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.196080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.196296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.196328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 
00:36:09.731 [2024-07-15 12:25:59.196545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.196576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.196725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.196756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.197036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.197067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.197345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.197376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.197586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.197617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.197817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.197847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.198045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.198076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.198267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.198299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.198494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.198525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.198714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.198745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 
00:36:09.731 [2024-07-15 12:25:59.198953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.198983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.199129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.199160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.199347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.199383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.199589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.199620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.199812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.199843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.199992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.200022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.200247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.200279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.200470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.200501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.200697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.200727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.200867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.200898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 
00:36:09.731 [2024-07-15 12:25:59.201102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.201133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.201330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.201362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.201548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.201578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.201849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.201879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.202116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.202146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.202345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.202376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.202633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.202665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.202871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.202902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.203159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.203190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.203479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.203513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 
00:36:09.731 [2024-07-15 12:25:59.203653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.203683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.203868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.203899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.204084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.204114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.204319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.204352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.204494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.204525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.204804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.731 [2024-07-15 12:25:59.204835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.731 qpair failed and we were unable to recover it. 00:36:09.731 [2024-07-15 12:25:59.205034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.205065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.205274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.205304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.205437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.205468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.205601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.205637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 
00:36:09.732 [2024-07-15 12:25:59.205868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.205899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.206087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.206118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.206337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.206368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.206575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.206606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.206744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.206775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.207045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.207075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.207273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.207305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.207495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.207526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.207727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.207758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.207996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.208026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 
00:36:09.732 [2024-07-15 12:25:59.208152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.208183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.208332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.208364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.208589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.208623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.208830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.208861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.209049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.209080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.209268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.209300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.209506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.209741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.209771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.209962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.209993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.210125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.210156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 
00:36:09.732 [2024-07-15 12:25:59.210430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.210461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.210674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.210704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.210837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.210868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.211026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.211056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.211282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.211314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.211504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.211535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.211669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.211704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.211860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.211891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.732 [2024-07-15 12:25:59.212088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.732 [2024-07-15 12:25:59.212119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.732 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.212253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.212285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 
00:36:09.733 [2024-07-15 12:25:59.212542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.212572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.212716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.212747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.212895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.212925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.213068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.213099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.213255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.213287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.213543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.213574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.213711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.213742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.213933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.213964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.214092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.214122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.214275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.214307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 
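For anyone triaging this run from the log alone: errno 111 on Linux is ECONNREFUSED, so each posix_sock_create / nvme_tcp_qpair_connect_sock pair above is one TCP connection attempt that was actively refused at 10.0.0.2:4420 (the NVMe/TCP default port). A minimal sketch, not SPDK code, that reproduces the same errno by connecting to a loopback port with no listener (127.0.0.1 stands in for 10.0.0.2 so it runs anywhere):

```c
/* Minimal sketch (not SPDK code): reproduce errno 111 (ECONNREFUSED)
 * by connecting to a port on which nothing is listening.
 * Port 4420 is taken from the log; 127.0.0.1 replaces 10.0.0.2 so the
 * sketch runs on any host. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected: something is listening on port 4420\n");
    }

    close(fd);
    return 0;
}
```

Compiled with a plain cc invocation, it prints the same "connect() failed, errno = 111" as long as nothing is accepting on the chosen port.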
00:36:09.733 [2024-07-15 12:25:59.214635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.214706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.214874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.214908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.215119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.215151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.215351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.215385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.215576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.215607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.215845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.215876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.216033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.216063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.216369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.216400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.216680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.216711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.216900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.216931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 
00:36:09.733 [2024-07-15 12:25:59.217125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.217156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.217342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.217373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.217559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.217590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.217792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.217831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.217951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.217982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.218116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.218147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.218289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.218322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.218525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.218555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.218815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.218846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.219038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.219069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 
00:36:09.733 [2024-07-15 12:25:59.219287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.219319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.219528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.219559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.219780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.219811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.219953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.219984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.733 qpair failed and we were unable to recover it. 00:36:09.733 [2024-07-15 12:25:59.220188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.733 [2024-07-15 12:25:59.220219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.220487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.220519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.220715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.220745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.220878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.220909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.221171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.221202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.221429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.221460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 
00:36:09.734 [2024-07-15 12:25:59.221754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.221785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.221990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.222021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.222217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.222255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.222511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.222542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.222796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.222826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.222983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.223014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.223234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.223267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.223471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.223501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.223655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.223686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.223835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.223866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 
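The qpair handle changes between blocks (0x15e3b60 earlier, 0x7fa234000b90 here, 0x7fa244000b90 below) while the target stays 10.0.0.2:4420, which is consistent with the host side setting up a fresh qpair for every reconnect attempt before reporting that it could not recover. A rough sketch of that shape at the plain-socket level, purely as an illustration and not the SPDK nvme_tcp recovery path (the attempt count, backoff, and reachability of the address are assumptions):

```c
/* Illustration only (not the SPDK recovery path): a bounded reconnect
 * loop that opens a fresh socket per attempt, mirroring how each block
 * above reports a new failed attempt against the same target. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;            /* success: caller owns the socket */

    int saved = errno;
    close(fd);
    errno = saved;
    return -1;
}

int main(void)
{
    /* 10.0.0.2:4420 is the target from the log; substitute a reachable
     * address when experimenting outside that test bed, otherwise the
     * attempts may time out instead of being refused. */
    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("attempt %d: connected\n", attempt);
            close(fd);
            return 0;
        }
        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        sleep(1);             /* crude backoff between attempts */
    }
    printf("giving up after repeated failures\n");
    return 1;
}
```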
00:36:09.734 [2024-07-15 12:25:59.224080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.224133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.224312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.224345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.224481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.224508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.224718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.224744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.224867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.224896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.225032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.225059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.225244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.225273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.225498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.225540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.225767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.225801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.226093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.226127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 
00:36:09.734 [2024-07-15 12:25:59.226280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.226308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.226556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.734 [2024-07-15 12:25:59.226582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.734 qpair failed and we were unable to recover it. 00:36:09.734 [2024-07-15 12:25:59.226722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.226751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.226977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.227009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.227199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.227253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.227481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.227524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.227832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.227878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.228107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.228331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.228366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.228576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.228613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 
00:36:09.735 [2024-07-15 12:25:59.228810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.228841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.229069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.229111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.229393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.229440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.229733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.229779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.230068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.230101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.230243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.230276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.230483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.230519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.230761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.230793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.231023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.231375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.231421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 
00:36:09.735 [2024-07-15 12:25:59.231718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.231763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.231994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.232029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.232157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.232189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.232435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.232468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.232747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.232778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.232970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.233002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.233120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.233151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.233361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.233406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.233721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.233766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.233948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.233992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 
00:36:09.735 [2024-07-15 12:25:59.234291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.234329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.234604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.234636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.234842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.234876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.235153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.235185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.235482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.235533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.235716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.235755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.236051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.236090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.236304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.236340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.236631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.236663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.236798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.236832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 
00:36:09.735 [2024-07-15 12:25:59.236974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.237005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.237136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.735 [2024-07-15 12:25:59.237167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.735 qpair failed and we were unable to recover it. 00:36:09.735 [2024-07-15 12:25:59.237362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.237410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.237625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.237671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.237840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.237885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.238109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.238143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.238341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.238375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.238519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.238551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.238683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.238716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.238995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.239026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 
00:36:09.736 [2024-07-15 12:25:59.239316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.239367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.239613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.239653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.239826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.239870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.240148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.240181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.240383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.240417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.240669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.240700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.240898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.240929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.241177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.241211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.241432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.241464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.241687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.241736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 
00:36:09.736 [2024-07-15 12:25:59.242051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.242097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.242396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.242435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.242711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.242743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.242894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.242925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.243135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.243169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.243381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.243414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.243611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.243659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.243893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.243933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.244083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.244126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.244429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.244465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 
00:36:09.736 [2024-07-15 12:25:59.244754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.244786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.245015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.245050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.245309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.245343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.245484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.245528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.245708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.245748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.246052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.736 [2024-07-15 12:25:59.246101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.736 qpair failed and we were unable to recover it. 00:36:09.736 [2024-07-15 12:25:59.246306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.737 [2024-07-15 12:25:59.246339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.737 qpair failed and we were unable to recover it. 00:36:09.737 [2024-07-15 12:25:59.246481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.737 [2024-07-15 12:25:59.246512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.737 qpair failed and we were unable to recover it. 00:36:09.737 [2024-07-15 12:25:59.246781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.737 [2024-07-15 12:25:59.246815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.737 qpair failed and we were unable to recover it. 00:36:09.737 [2024-07-15 12:25:59.246954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.737 [2024-07-15 12:25:59.246985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.737 qpair failed and we were unable to recover it. 
00:36:09.737 [2024-07-15 12:25:59.247186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.737 [2024-07-15 12:25:59.247217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420
00:36:09.737 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with successive timestamps from 12:25:59.247 through 12:25:59.299 ...]
00:36:09.742 [2024-07-15 12:25:59.299904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.299949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.300183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.300220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.300431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.300464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.300595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.300625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.300817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.300851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.301131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.301163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.301316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.301362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.301574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.301616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.301852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.301897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.302125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.302161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 
00:36:09.742 [2024-07-15 12:25:59.302363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.302397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.302652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.302687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.302893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.302924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.303132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.303174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.303402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.303447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.303673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.303717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.303883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.303928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.304139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.304171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.304413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.304448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.304578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.304618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 
00:36:09.742 [2024-07-15 12:25:59.304761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.304799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.304939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.304969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.305168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.305213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.742 qpair failed and we were unable to recover it. 00:36:09.742 [2024-07-15 12:25:59.305452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.742 [2024-07-15 12:25:59.305497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.305735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.305779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.306103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.306140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.306354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.306386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.306576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.306608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.306820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.306852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.307133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.307167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 
00:36:09.743 [2024-07-15 12:25:59.307312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.307345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.307549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.307587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.307746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.307792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.308009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.308052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.308316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.308361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.308521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.308558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.308792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.308823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.309067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.309100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.309279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.309314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.309521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.309564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 
00:36:09.743 [2024-07-15 12:25:59.309784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.309829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.310057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.310098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.310331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.310370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.310577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.310609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.310797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.310827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.310962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.311005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.311137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.311168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.311321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.311354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.311624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.311672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.311843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.311882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 
00:36:09.743 [2024-07-15 12:25:59.312040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.312083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.312407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.312445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.312648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.312679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.743 qpair failed and we were unable to recover it. 00:36:09.743 [2024-07-15 12:25:59.312831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.743 [2024-07-15 12:25:59.312862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.313148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.313180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.313366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.313398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.313594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.313640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.313809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.313850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.314000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.314044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.314340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.314379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 
00:36:09.744 [2024-07-15 12:25:59.314497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.314535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.314759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.314790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.314928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.314959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.315109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.315139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.315344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.315379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.315562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.315593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.315802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.315845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.316007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.316052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.316213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.316266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.316519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.316563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 
00:36:09.744 [2024-07-15 12:25:59.316763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.316798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.316947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.316979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.317165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.317197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.317447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.317483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.317621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.317653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.317898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.317943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.318275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.318319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.318646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.318683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.318820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.744 [2024-07-15 12:25:59.318850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.744 qpair failed and we were unable to recover it. 00:36:09.744 [2024-07-15 12:25:59.319038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.319069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 
00:36:09.745 [2024-07-15 12:25:59.319205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.319252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.319536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.319566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.319848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.319897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.320114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.320152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.320321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.320370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.320537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.320571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.320830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.320863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.321011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.321052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.321177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.321208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.321377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.321409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 
00:36:09.745 [2024-07-15 12:25:59.321638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.321683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.321920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.321964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.322256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.322302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.322557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.322594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.322803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.322835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.323109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.323140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.323264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.323296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.323455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.323492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.323762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.323794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.323921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.323952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 
00:36:09.745 [2024-07-15 12:25:59.324147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.324201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.324377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.324425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.324681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.324720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.324973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.325010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.325207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.325252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.325406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.745 [2024-07-15 12:25:59.325437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.745 qpair failed and we were unable to recover it. 00:36:09.745 [2024-07-15 12:25:59.325741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.325774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.325942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.325974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.326137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.326180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.326366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.326414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 
00:36:09.746 [2024-07-15 12:25:59.326720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.326763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.326995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.327031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.327287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.327320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.327455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.327486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.327629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.327663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.327814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.327846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.327995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.328025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.328243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.328293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.328552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.328591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.328898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.328936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 
00:36:09.746 [2024-07-15 12:25:59.329067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.329099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.329282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.329317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.329553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.329587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.329717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.329748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.329870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.329900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.330135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.746 [2024-07-15 12:25:59.330182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.746 qpair failed and we were unable to recover it. 00:36:09.746 [2024-07-15 12:25:59.330365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.747 [2024-07-15 12:25:59.330406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.747 qpair failed and we were unable to recover it. 00:36:09.747 [2024-07-15 12:25:59.330664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.747 [2024-07-15 12:25:59.330707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.747 qpair failed and we were unable to recover it. 00:36:09.747 [2024-07-15 12:25:59.330921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.747 [2024-07-15 12:25:59.330955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.747 qpair failed and we were unable to recover it. 00:36:09.747 [2024-07-15 12:25:59.331153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.747 [2024-07-15 12:25:59.331184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:09.747 qpair failed and we were unable to recover it. 
00:36:09.747 [2024-07-15 12:25:59.331346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.747 [2024-07-15 12:25:59.331378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420
00:36:09.747 qpair failed and we were unable to recover it.
[The same three-line error sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 12:25:59.331 through 12:25:59.368.]
00:36:09.752 [2024-07-15 12:25:59.368496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.752 [2024-07-15 12:25:59.368565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:09.752 qpair failed and we were unable to recover it.
[From this point the same sequence repeats for tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 through 12:25:59.380.]
00:36:09.754 [2024-07-15 12:25:59.380508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.380550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.380743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.380774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.380948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.380978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.381171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.381201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.381324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.381354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.381492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.381522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.381746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.381776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.381999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.382029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.382223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.382262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.382407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.382437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 
00:36:09.754 [2024-07-15 12:25:59.382654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.382685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.382881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.382913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.383121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.383153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.383350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.383382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.383581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.383612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.383871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.383902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.384157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.384188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.384399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.384430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.384580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.384610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.384831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.384861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 
00:36:09.754 [2024-07-15 12:25:59.384988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.385019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.385212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.385255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.385444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.385475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.385681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.385712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.385841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.385870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.386006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.386035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.754 [2024-07-15 12:25:59.386233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.754 [2024-07-15 12:25:59.386265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.754 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.386474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.386506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.386651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.386681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.386834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.386863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 
00:36:09.755 [2024-07-15 12:25:59.387074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.387108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.387247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.387279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.387406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.387436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.387705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.387736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.387889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.387918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.388147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.388177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.388423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.388454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.388709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.388740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.388881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.388912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.389102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.389133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 
00:36:09.755 [2024-07-15 12:25:59.389422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.389453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.389679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.389711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.389945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.389976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.390118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.390149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.390341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.390373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.390523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.390554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.390806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.390837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.390972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.391192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.391359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 
00:36:09.755 [2024-07-15 12:25:59.391525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.391689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.391871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.391900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.392108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.392137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.392340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.392371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.392517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.392547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.392729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.392760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.393037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.393067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.393368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.755 [2024-07-15 12:25:59.393401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.755 qpair failed and we were unable to recover it. 00:36:09.755 [2024-07-15 12:25:59.393638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.393669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 
00:36:09.756 [2024-07-15 12:25:59.393925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.393955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.394094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.394125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.394254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.394285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.394429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.394458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.394597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.394628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.394881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.394915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.395122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.395152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.395289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.395326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.395461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.395491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.395681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.395712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 
00:36:09.756 [2024-07-15 12:25:59.395898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.395928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.396140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.396171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.396346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.396377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.396569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.396599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.396793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.396823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.396962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.396993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.397129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.397161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.397348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.397380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.397638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.397669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.397802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.397833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 
00:36:09.756 [2024-07-15 12:25:59.398036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.398065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.398324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.398355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.398573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.398602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.398796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.398826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.398961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.398990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.399235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.399267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.399478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.399509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.399648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.399679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.756 [2024-07-15 12:25:59.399899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.756 [2024-07-15 12:25:59.399930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.756 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.400147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.400178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 
00:36:09.757 [2024-07-15 12:25:59.400522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.400554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.400822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.400853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.401076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.401108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.401412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.401444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.401571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.401602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.401886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.401918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.402043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.402073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.402235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.402267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.402532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.402565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.402701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.402732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 
00:36:09.757 [2024-07-15 12:25:59.402927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.402958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.403154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.403184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.403328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.403359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.403490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.403520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.403661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.403693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.403845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.403876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.404060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.404090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.404342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.404380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.404570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.404600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.404728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.404758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 
00:36:09.757 [2024-07-15 12:25:59.404968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.404999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.405134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.405164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.405297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.405329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.405527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.405557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.405696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.405727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.405985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.406019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.406324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.406356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.406619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.406649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.406808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.406838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 00:36:09.757 [2024-07-15 12:25:59.406961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.757 [2024-07-15 12:25:59.406992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.757 qpair failed and we were unable to recover it. 
00:36:09.757 [2024-07-15 12:25:59.407135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.407164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.407359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.407389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.407607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.407636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.407771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.407803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.407998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.408029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.408218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.408277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.408516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.408547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.408745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.408775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.408960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.408990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.409188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.409218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-07-15 12:25:59.409418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.409448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.409589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.409619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.409809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.409840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.410053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.410083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.410213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.410255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.410461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.410503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.410715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.410745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.410885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.410914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.758 [2024-07-15 12:25:59.411105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.758 [2024-07-15 12:25:59.411136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.758 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.411422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.411467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-07-15 12:25:59.411760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.411803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.412130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.412174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.412346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.412386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.412608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.412652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.412938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.412972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.413122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.413154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.413359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.413394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.413525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.413561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.413750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.413780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.413957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.413988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-07-15 12:25:59.414109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.414140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.414289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.414321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.414605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.414637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.414825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.414853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.415930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.415961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-07-15 12:25:59.416156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.416186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.416398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.416432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.416567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.416597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.416877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.416907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.417100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.417132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.417339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.417371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.417481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.417511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.417715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.417744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.759 qpair failed and we were unable to recover it. 00:36:09.759 [2024-07-15 12:25:59.417943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.759 [2024-07-15 12:25:59.417973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.418099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.418130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 
00:36:09.760 [2024-07-15 12:25:59.418416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.418450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.418593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.418623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.418813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.418842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.419028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.419057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.419251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.419283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.419418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.419448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.419729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.419759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.419985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.420017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.420291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.420323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.420458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.420489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 
00:36:09.760 [2024-07-15 12:25:59.420705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.420737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.420966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.420996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.421190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.421221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.421544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.421575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.421723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.421753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.421945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.421975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.422282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.422316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.422460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.422497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.422623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.422652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.422852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.422884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 
00:36:09.760 [2024-07-15 12:25:59.423092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.423123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.423315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.423347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.423475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.423505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.423644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.423674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.423803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.423834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.424092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.424123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.424320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.424352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.424479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.424508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.424776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.424807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 00:36:09.760 [2024-07-15 12:25:59.425083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.760 [2024-07-15 12:25:59.425113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.760 qpair failed and we were unable to recover it. 
00:36:09.761 [2024-07-15 12:25:59.425263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.425294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.425428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.425458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.425647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.425678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.425862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.425892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.426096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.426126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.426270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.426302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.426431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.426460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.426743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.426773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.426911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.426941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.427085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.427115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 
00:36:09.761 [2024-07-15 12:25:59.427323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.427355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.427544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.427574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.427788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.427817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.428099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.428332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.428501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.428653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.428871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.428996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.429025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.429240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.429275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 
00:36:09.761 [2024-07-15 12:25:59.429407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.429438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.429719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.429749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.430026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.430056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.430302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.430334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.430464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.430495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.430690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.430721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.430856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.430886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.431020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.431054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.431178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.431208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.431374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.431405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 
00:36:09.761 [2024-07-15 12:25:59.431612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.761 [2024-07-15 12:25:59.431642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.761 qpair failed and we were unable to recover it. 00:36:09.761 [2024-07-15 12:25:59.431832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.431863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.432957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.432988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.433194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.433253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.433391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.433421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 
00:36:09.762 [2024-07-15 12:25:59.433623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.433653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.433784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.433814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.433935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.433965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.434178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.434209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.434412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.434444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.434661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.434691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.434914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.434944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.435195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.435234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.435465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.435496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.435644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.435673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 
00:36:09.762 [2024-07-15 12:25:59.435864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.435893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.436085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.436117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.436255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.436288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.436494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.436525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.436678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.436708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.436909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.436939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.437120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.437151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.437403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.437436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.437701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.437732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.437918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.437948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 
00:36:09.762 [2024-07-15 12:25:59.438238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.438270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.438423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.438453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.762 [2024-07-15 12:25:59.438658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.762 [2024-07-15 12:25:59.438689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.762 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.438910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.438941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.439138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.439168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.439298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.439330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.439586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.439615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.439737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.439772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.440025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.440056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.440264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.440297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 
00:36:09.763 [2024-07-15 12:25:59.440435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.440466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.440744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.440774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.440902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.440931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.441071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.441101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.441302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.441334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.441525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.441555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.441740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.441770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.441967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.441998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.442198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.442236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.442388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.442418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 
00:36:09.763 [2024-07-15 12:25:59.442678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.442708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.442851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.442881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.443139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.443170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.443377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.443409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.443604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.443634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.443852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.443883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.444167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.444198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.444407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.444439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.444578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.444608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.444816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.444847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 
00:36:09.763 [2024-07-15 12:25:59.445071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.445102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.445236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.445267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.445484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.445514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.445707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.445737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.445941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.445973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.446178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.446209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.446404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.763 [2024-07-15 12:25:59.446435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.763 qpair failed and we were unable to recover it. 00:36:09.763 [2024-07-15 12:25:59.446591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.446621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.446872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.446902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.447051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.447080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 
00:36:09.764 [2024-07-15 12:25:59.447207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.447246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.447451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.447482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.447678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.447709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.447911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.447941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.448131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.448161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.448365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.448396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.448540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.448570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.448755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.448792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.449042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.449073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.449203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.449250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 
00:36:09.764 [2024-07-15 12:25:59.449449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.449479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.449701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.449731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.449936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.449966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.450166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.450196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.450396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.450426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.450559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.450589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.450809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.450840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.451120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.451151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.451301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.451331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.451476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.451505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 
00:36:09.764 [2024-07-15 12:25:59.451634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.451664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.451885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.451916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.452125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.452155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.452340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.452372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.452589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.452619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.452872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.452902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.453037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.453068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.453271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.453301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.453468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.453497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.453628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.453659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 
00:36:09.764 [2024-07-15 12:25:59.453869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.764 [2024-07-15 12:25:59.453900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.764 qpair failed and we were unable to recover it. 00:36:09.764 [2024-07-15 12:25:59.454038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.454068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.454216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.454258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.454469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.454499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.454697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.454728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.454933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.454964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.455151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.455181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.455330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.455362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.455569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.455599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.455921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.455951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 
00:36:09.765 [2024-07-15 12:25:59.456170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.456200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.456424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.456455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.456590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.456620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.456847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.456878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.457076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.457106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.457295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.457326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.457522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.457551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.457844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.457880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.458101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.458132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.458326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.458356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 
00:36:09.765 [2024-07-15 12:25:59.458545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.458574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.458780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.458809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.458996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.459027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.459242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.459273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.459498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.459528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.459719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.459750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.459975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.460005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.460147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.460178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.460331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.460362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.460631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.460662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 
00:36:09.765 [2024-07-15 12:25:59.460815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.460844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.460985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.461016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.461144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.461174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.461338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.461368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.461591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.461622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.461874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.461904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.462032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.462062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.462261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.462292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.462485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.462520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.462730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.462761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 
00:36:09.765 [2024-07-15 12:25:59.462966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.765 [2024-07-15 12:25:59.462996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.765 qpair failed and we were unable to recover it. 00:36:09.765 [2024-07-15 12:25:59.463186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.463215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.463437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.463467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.463672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.463701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.463840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.463872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.464072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.464103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.464292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.464323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.464442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.464472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.464611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.464642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.464866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.464896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 
00:36:09.766 [2024-07-15 12:25:59.465149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.465180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.465332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.465364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.465529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.465559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.465691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.465722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.465938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.465968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.466166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.466196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.466344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.466375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.466578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.466614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.466817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.466847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.467101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.467132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 
00:36:09.766 [2024-07-15 12:25:59.467339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.467371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.467510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.467540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.467755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.467784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.467908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.467939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.468146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.468177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.468307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.468338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.468546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.468576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.468709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.468740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.468903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.468934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.469067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.469097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 
00:36:09.766 [2024-07-15 12:25:59.469251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.469282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.469449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.469479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.469608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.469639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.469833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.469863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.470116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.470146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.470280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.470311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.470510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.470541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.470692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.470722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.470920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.470950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.471139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.471170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 
00:36:09.766 [2024-07-15 12:25:59.471507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.766 [2024-07-15 12:25:59.471540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.766 qpair failed and we were unable to recover it. 00:36:09.766 [2024-07-15 12:25:59.471793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.471823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.471953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.471984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.472134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.472165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.472393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.472427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.472740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.472771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.472962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.472992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.473122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.473152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.473366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.473398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.473612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.473642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 
00:36:09.767 [2024-07-15 12:25:59.473902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.473933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.474220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.474258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.474410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.474441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.474695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.474725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.475008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.475039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.475245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.475276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.475476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.475506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.475698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.475734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.475938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.475968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.476244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.476276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 
00:36:09.767 [2024-07-15 12:25:59.476472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.476503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.476623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.476652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.476933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.476964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.477110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.477140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.477315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.477347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.477536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.477566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.477753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.477783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.477968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.477999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.478193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.478232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.478534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.478564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 
00:36:09.767 [2024-07-15 12:25:59.478764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.478794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.479079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.479110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.479246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.479279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.479534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.479565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.479845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.479875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.480064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.480094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.480221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.480277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.480575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.480606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.480874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.480904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.481104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.481135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 
00:36:09.767 [2024-07-15 12:25:59.481348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.481380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.767 qpair failed and we were unable to recover it. 00:36:09.767 [2024-07-15 12:25:59.481498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.767 [2024-07-15 12:25:59.481529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.481700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.481730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.481951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.481981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.482243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.482275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.482484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.482514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.482623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.482653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.482906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.482937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.483139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.483169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.483456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.483488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 
00:36:09.768 [2024-07-15 12:25:59.483771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.483801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.483957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.483988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.484107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.484138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.484282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.484312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.484516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.484546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.484733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.484764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.485049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.485080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.485320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.485356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.485589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.485620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 00:36:09.768 [2024-07-15 12:25:59.485881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.768 [2024-07-15 12:25:59.485913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.768 qpair failed and we were unable to recover it. 
00:36:09.768 [2024-07-15 12:25:59.486189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.768 [2024-07-15 12:25:59.486220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:09.768 qpair failed and we were unable to recover it.
00:36:09.768 [... the same three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every retry from 12:25:59.486418 through 12:25:59.533605 ...]
00:36:09.773 [2024-07-15 12:25:59.533735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.773 [2024-07-15 12:25:59.533764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:09.773 qpair failed and we were unable to recover it.
00:36:09.773 [2024-07-15 12:25:59.533904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.533935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.534127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.534158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.534359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.534392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.534664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.534695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.534831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.534861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.535055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.535085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.535287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.535318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.535470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.535500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.535636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.535667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.535924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.535954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 
00:36:09.774 [2024-07-15 12:25:59.536237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.536269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.536411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.536441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.536562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.536592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.536744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.536775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.536970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.537006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.537265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.537298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.537494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.537524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.537658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.537689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.537889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.537920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.538066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.538095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 
00:36:09.774 [2024-07-15 12:25:59.538370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.538402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.538602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.538632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.538835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.538865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.539087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.539117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.539245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.539276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.539475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.539506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.539708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.539738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.539926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.539957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.540169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.540198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.540404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.540434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 
00:36:09.774 [2024-07-15 12:25:59.540625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.540655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.540842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.540873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.541007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.541038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.541260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.541294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.541443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.541473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.541666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.541697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.541842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.541871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.542070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.542101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.542245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.542277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.774 [2024-07-15 12:25:59.542558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.542589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 
00:36:09.774 [2024-07-15 12:25:59.542777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.774 [2024-07-15 12:25:59.542807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.774 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.542946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.542976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.543109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.543139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.543330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.543362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.543508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.543537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.543721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.543750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.543896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.543926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.544140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.544172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.544320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.544351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.544605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.544636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 
00:36:09.775 [2024-07-15 12:25:59.544821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.544852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.545110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.545141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.545410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.545442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.545642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.545673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.545819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.545856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.546053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.546083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.546340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.546371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.546623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.546654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.546787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.546816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.547025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.547056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 
00:36:09.775 [2024-07-15 12:25:59.547312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.547344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.547491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.547522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.547725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.547755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.547897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.547927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.548205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.548242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.548377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.548407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.548632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.548662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.548913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.548942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.549155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.549185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.549390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.549420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 
00:36:09.775 [2024-07-15 12:25:59.549624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.549655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.549845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.549875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.550018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.550049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.550324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.550355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.550634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.550666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.550884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.550916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.551110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.551141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.551417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.551449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.551584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.551615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.551799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.551830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 
00:36:09.775 [2024-07-15 12:25:59.552108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.552139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.775 qpair failed and we were unable to recover it. 00:36:09.775 [2024-07-15 12:25:59.552366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.775 [2024-07-15 12:25:59.552399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.552657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.552688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.552839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.552869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.553090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.553120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.553326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.553356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.553487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.553518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.553751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.553782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.554053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.554083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.554221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.554261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 
00:36:09.776 [2024-07-15 12:25:59.554517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.554548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.554764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.554794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.555048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.555079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.555295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.555327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.555480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.555516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.555714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.555744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.555944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.555974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.556129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.556159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.556436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.556467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.556676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.556707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 
00:36:09.776 [2024-07-15 12:25:59.556927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.556958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.557244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.557276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.557470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.557500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.557779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.557810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.557949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.557979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.558174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.558204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.558492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.558523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.558719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.558749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.558969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.559166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 
00:36:09.776 [2024-07-15 12:25:59.559324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.559506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.559684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.559838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.559868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.560076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.560107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.776 qpair failed and we were unable to recover it. 00:36:09.776 [2024-07-15 12:25:59.560340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.776 [2024-07-15 12:25:59.560371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.560570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.560601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.560791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.560821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.560956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.560987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.561185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.561214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 
00:36:09.777 [2024-07-15 12:25:59.561444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.561475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.561681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.561712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.561842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.561872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.562127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.562157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.562413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.562445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.562600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.562632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.562832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.562863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.562982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.563012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.563209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.563248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.563381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.563411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 
00:36:09.777 [2024-07-15 12:25:59.563610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.563640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.563842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.563872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.563997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.564027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.564236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.564268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.564555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.564591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.564781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.564812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.565008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.565038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.565163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.565193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.565348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.565380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.565637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.565668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 
00:36:09.777 [2024-07-15 12:25:59.565789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.565820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.566081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.566111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.566260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.566291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.566476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.566507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.566704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.566735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.566927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.566957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.567147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.567177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.567391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.567422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.567653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.567683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.567937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.567968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 
00:36:09.777 [2024-07-15 12:25:59.568192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.568222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.568446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.568477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.568628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.568658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.568956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.568987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.777 [2024-07-15 12:25:59.569177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.777 [2024-07-15 12:25:59.569208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.777 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.569429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.569461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.569589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.569619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.569848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.569878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.570000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.570030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.570244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.570276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 
00:36:09.778 [2024-07-15 12:25:59.570419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.570450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.570678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.570710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.570990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.571158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.571327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.571582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.571742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.571909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.571938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.572137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.572167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.572373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.572404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 
00:36:09.778 [2024-07-15 12:25:59.572681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.572712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.572899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.572929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.573062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.573092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.573292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.573325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.573459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.573494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.573633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.573665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.573852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.573883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.574073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.574103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.574308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.574340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.574528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.574559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 
00:36:09.778 [2024-07-15 12:25:59.574684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.574715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.574861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.574892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.575038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.575068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.575345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.575377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.575601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.575631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.575785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.575815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.575961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.575990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.576114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.576144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.576371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.576402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.576605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.576635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 
00:36:09.778 [2024-07-15 12:25:59.576824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.576854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.577141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.577170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.577383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.577415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.577608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.577639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.778 qpair failed and we were unable to recover it. 00:36:09.778 [2024-07-15 12:25:59.577841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.778 [2024-07-15 12:25:59.577872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.578126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.578156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.578362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.578395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.578529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.578558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.578750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.578781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.579035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.579065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 
00:36:09.779 [2024-07-15 12:25:59.579345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.579376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.579584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.579615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.579835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.579865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.580015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.580045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.580164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.580194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.580336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.580367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.580599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.580630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.580906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.580936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.581066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.581096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.581306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.581337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 
00:36:09.779 [2024-07-15 12:25:59.581464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.581494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.581621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.581651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.581864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.581894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.582098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.582129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.582341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.582377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.582518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.582548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.582676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.582705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.582901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.582932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.583074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.583105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.583292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.583323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 
00:36:09.779 [2024-07-15 12:25:59.583521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.583551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.583842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.583872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.584131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.584161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.584371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.584402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.584596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.584627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.584761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.584791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.585001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.585032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.585291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.585322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.585583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.585614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.585840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.585870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 
00:36:09.779 [2024-07-15 12:25:59.586012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.586042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.586296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.586327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.586457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.586488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.586628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.586657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.779 qpair failed and we were unable to recover it. 00:36:09.779 [2024-07-15 12:25:59.586794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.779 [2024-07-15 12:25:59.586823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.587046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.587077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.587185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.587215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.587426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.587456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.587652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.587682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.587807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.587837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 
00:36:09.780 [2024-07-15 12:25:59.588055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.588086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.588344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.588377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.588494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.588525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.588727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.588757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.588935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.588964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.589165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.589197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.589464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.589495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.589751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.589782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.590038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.590068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.590259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.590290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 
00:36:09.780 [2024-07-15 12:25:59.590475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.590506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.590659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.590690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.590943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.590973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.591279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.591309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.591509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.591544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.591764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.591795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.591917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.591948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.592238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.592271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.592467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.592497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.592648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.592678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 
00:36:09.780 [2024-07-15 12:25:59.592933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.592963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.593168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.593198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.593419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.593450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.593682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.593712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.593853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.593882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.594143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.594174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.594386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.594418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.594565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.594596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.594796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.594827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.595046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.595076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 
00:36:09.780 [2024-07-15 12:25:59.595335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.595367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.595488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.595517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.595655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.595685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.595937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.596159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.780 [2024-07-15 12:25:59.596190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.780 qpair failed and we were unable to recover it. 00:36:09.780 [2024-07-15 12:25:59.596453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.596484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.596752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.596783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.596921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.596952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.597157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.597188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.597489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.597520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 
00:36:09.781 [2024-07-15 12:25:59.597657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.597689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.597865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.597934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.598144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.598179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.598335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.598369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.598623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.598654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.598778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.598810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.599101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.599131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.599292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.599323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.599531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.599562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.599695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.599725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 
00:36:09.781 [2024-07-15 12:25:59.600006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.600037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.600317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.600349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.600552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.600583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.600708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.600739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.600942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.600973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.601260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.601292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.601546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.601576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.601730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.601761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.601979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.602010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 00:36:09.781 [2024-07-15 12:25:59.602292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.781 [2024-07-15 12:25:59.602324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.781 qpair failed and we were unable to recover it. 
00:36:09.781 [2024-07-15 12:25:59.602464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.781 [2024-07-15 12:25:59.602495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.782 qpair failed and we were unable to recover it.
00:36:09.782 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 12:25:59.602 through 12:25:59.649 ...]
00:36:09.787 [2024-07-15 12:25:59.649308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:09.787 [2024-07-15 12:25:59.649337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:09.787 qpair failed and we were unable to recover it.
00:36:09.787 [2024-07-15 12:25:59.649551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.649581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.649782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.649813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.650105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.650273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.650437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.650626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.650851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.650998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.651029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.651239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.651271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 00:36:09.787 [2024-07-15 12:25:59.651568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.787 [2024-07-15 12:25:59.651599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.787 qpair failed and we were unable to recover it. 
00:36:09.787 [2024-07-15 12:25:59.651744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.651775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.651920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.651950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.652140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.652170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.652367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.652399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.652588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.652620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.652874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.652920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.653070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.653101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.653369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.653402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.653678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.653709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.653908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.653939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 
00:36:09.788 [2024-07-15 12:25:59.654092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.654123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.654404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.654435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.654642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.654673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.654936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.654966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.655168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.655198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.655464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.655496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.655639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.655670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.655968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.655999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.656209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.656473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.656504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 
00:36:09.788 [2024-07-15 12:25:59.656756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.656787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.656977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.657007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.657146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.657177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.657373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.657405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.657606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.657637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.657918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.657949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.658199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.658238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.658390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.658421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.658603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.658634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.658834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.658865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 
00:36:09.788 [2024-07-15 12:25:59.659009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.659041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.659243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.659273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.659464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.659503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.659708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.659739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.659908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.659939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.660192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.788 [2024-07-15 12:25:59.660223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.788 qpair failed and we were unable to recover it. 00:36:09.788 [2024-07-15 12:25:59.660444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.660475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.660663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.660694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.660967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.660998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.661187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.661217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 
00:36:09.789 [2024-07-15 12:25:59.661431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.661462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.661658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.661688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.661925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.661956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.662159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.662190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.662437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.662469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.662595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.662625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.662822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.662853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.663054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.663085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.663276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.663309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.663498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.663529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 
00:36:09.789 [2024-07-15 12:25:59.663668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.663698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.663977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.664008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.664196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.664235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.664488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.664518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.664771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.664802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.665011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.665042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.665185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.665215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.665413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.665444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.665698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.665728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.665943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.665978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 
00:36:09.789 [2024-07-15 12:25:59.666182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.666212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.666443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.666474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.666662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.666693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.666838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.666869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.667085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.667115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.667319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.667351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.789 [2024-07-15 12:25:59.667564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.789 [2024-07-15 12:25:59.667594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.789 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.667785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.667816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.667954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.667984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.668133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.668163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 
00:36:09.790 [2024-07-15 12:25:59.668363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.668395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.668621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.668652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.668784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.668815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.669025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.669057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.669268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.669300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.669490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.669521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.669702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.669732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.669921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.669952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.670097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.670128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.670323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.670355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 
00:36:09.790 [2024-07-15 12:25:59.670551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.670582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.670769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.670800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.671055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.671086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.671285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.671317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.671581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.671612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.671785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.671816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.672043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.672074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.672230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.672262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.672480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.672512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.672648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.672679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 
00:36:09.790 [2024-07-15 12:25:59.672820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.672851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.672995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.673024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.673150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.673181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.673394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.673425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.673571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.673602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.673726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.673757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.674043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.674202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.674239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.674465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.674495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.674703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.674734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 
00:36:09.790 [2024-07-15 12:25:59.674916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.674953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.790 [2024-07-15 12:25:59.675174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.790 [2024-07-15 12:25:59.675204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.790 qpair failed and we were unable to recover it. 00:36:09.791 [2024-07-15 12:25:59.675490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.791 [2024-07-15 12:25:59.675521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.791 qpair failed and we were unable to recover it. 00:36:09.791 [2024-07-15 12:25:59.675726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.791 [2024-07-15 12:25:59.675756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.791 qpair failed and we were unable to recover it. 00:36:09.791 [2024-07-15 12:25:59.675966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.791 [2024-07-15 12:25:59.675996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:09.791 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.676197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.676237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.676519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.676551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.676762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.676791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.677023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.677054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.677191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.677222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 
00:36:10.070 [2024-07-15 12:25:59.677431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.677462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.677620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.677651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.677793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.677823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.070 qpair failed and we were unable to recover it. 00:36:10.070 [2024-07-15 12:25:59.678101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.070 [2024-07-15 12:25:59.678131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.678279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.678312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.678520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.678551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.678682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.678712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.678885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.678917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.679112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.679142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 00:36:10.071 [2024-07-15 12:25:59.679367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.071 [2024-07-15 12:25:59.679399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.071 qpair failed and we were unable to recover it. 
00:36:10.071 [2024-07-15 12:25:59.679571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.071 [2024-07-15 12:25:59.679601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:10.071 qpair failed and we were unable to recover it.
[identical errors repeat for every retry from 12:25:59.679732 through 12:25:59.728156: posix.c:1038:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."]
00:36:10.077 [2024-07-15 12:25:59.728355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.077 [2024-07-15 12:25:59.728387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:10.077 qpair failed and we were unable to recover it.
00:36:10.077 [2024-07-15 12:25:59.728578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.077 [2024-07-15 12:25:59.728608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.077 qpair failed and we were unable to recover it. 00:36:10.077 [2024-07-15 12:25:59.728753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.077 [2024-07-15 12:25:59.728784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.077 qpair failed and we were unable to recover it. 00:36:10.077 [2024-07-15 12:25:59.728971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.077 [2024-07-15 12:25:59.729002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.077 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.729141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.729171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.729458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.729489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.729787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.729817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.729965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.729995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.730218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.730264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.730381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.730412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.730694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.730724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 
00:36:10.078 [2024-07-15 12:25:59.730861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.730890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.731086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.731116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.731331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.731363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.731552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.731582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.731721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.731757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.731903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.731934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.732204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.732244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.732380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.732410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.732619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.732650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.732836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.732866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 
00:36:10.078 [2024-07-15 12:25:59.733120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.733150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.733402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.733433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.733629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.733660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.733853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.733883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.734133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.734164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.734354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.734385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.734583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.734613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.734816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.734847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.735048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.735078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.078 qpair failed and we were unable to recover it. 00:36:10.078 [2024-07-15 12:25:59.735378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.078 [2024-07-15 12:25:59.735410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 
00:36:10.079 [2024-07-15 12:25:59.735529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.735559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.735760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.735790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.735977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.736007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.736221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.736258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.736382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.736413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.736585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.736616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.736916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.736947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.737138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.737169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.737300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.737330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.737518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.737548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 
00:36:10.079 [2024-07-15 12:25:59.737738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.737769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.738026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.738061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.738250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.738282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.738535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.738564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.738813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.738844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.739097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.739126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.739378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.739409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.739530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.739561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.739815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.739845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.740043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.740073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 
00:36:10.079 [2024-07-15 12:25:59.740276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.740308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.740508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.740539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.740745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.740776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.740925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.740956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.741144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.741174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.079 [2024-07-15 12:25:59.741387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.079 [2024-07-15 12:25:59.741418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.079 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.741553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.741582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.741795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.741826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.741964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.741995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.742209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.742245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 
00:36:10.080 [2024-07-15 12:25:59.742381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.742412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.742534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.742564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.742771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.742801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.742929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.742960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.743153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.743184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.743425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.743456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.743664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.743694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.743895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.743926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.744050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.744082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.744341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.744374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 
00:36:10.080 [2024-07-15 12:25:59.744593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.744624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.744822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.744852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.744999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.745029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.745182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.745213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.745433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.745464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.745664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.745695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.745897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.745927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.746132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.746418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.746450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.746657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.746687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 
00:36:10.080 [2024-07-15 12:25:59.746905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.746936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.747131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.747162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.747433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.080 [2024-07-15 12:25:59.747467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.080 qpair failed and we were unable to recover it. 00:36:10.080 [2024-07-15 12:25:59.747670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.747701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.747915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.747945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.748234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.748266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.748523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.748554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.748850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.748880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.749094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.749125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.749263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.749295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 
00:36:10.081 [2024-07-15 12:25:59.749501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.749531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.749666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.749696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.749837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.749867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.750004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.750036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.750290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.750321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.750514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.750544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.750689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.750720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.750976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.751006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.751159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.751189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.751348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.751379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 
00:36:10.081 [2024-07-15 12:25:59.751502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.751533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.751725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.751755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.752008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.752038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.752290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.752321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.752518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.752549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.752864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.752894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.753081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.753113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.753249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.753281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.753528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.753560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 00:36:10.081 [2024-07-15 12:25:59.753691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.081 [2024-07-15 12:25:59.753727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.081 qpair failed and we were unable to recover it. 
00:36:10.081 [2024-07-15 12:25:59.753922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.753953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.754145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.754175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.754385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.754417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.754693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.754724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.754924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.754954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.755155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.755186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.755396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.755427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.755619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.755650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.755912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.755943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.756215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.756254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-07-15 12:25:59.756522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.756553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.756806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.756836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.757029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.757061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.757259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.757291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.757430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.757460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.757736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.757767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.757893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.757923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.758178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.758209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.758383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.758414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.758697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.758728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 
00:36:10.082 [2024-07-15 12:25:59.758862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.758892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.082 qpair failed and we were unable to recover it. 00:36:10.082 [2024-07-15 12:25:59.759081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.082 [2024-07-15 12:25:59.759112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.759235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.759267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.759474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.759510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.759748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.759779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.759958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.759988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.760205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.760248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.760461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.760492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.760628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.760658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.760858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.760889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-07-15 12:25:59.761058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.761090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.761288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.761319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.761523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.761554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.761757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.761788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.762011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.762042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.762321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.762352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.762631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.762661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.762849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.762880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.763077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.763108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.763386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.763416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 
00:36:10.083 [2024-07-15 12:25:59.763622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.763652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.763838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.763868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.764057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.764087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.764289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.764319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.764444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.764475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.764632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.764662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.764968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.764999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.765145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-07-15 12:25:59.765176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.083 qpair failed and we were unable to recover it. 00:36:10.083 [2024-07-15 12:25:59.765392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.765422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.765727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.765757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 
00:36:10.084 [2024-07-15 12:25:59.766033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.766064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.766282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.766315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.766447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.766478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.766631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.766663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.766946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.766977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.767109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.767276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.767440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.767616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.767798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 
00:36:10.084 [2024-07-15 12:25:59.767944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.767975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.768169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.768200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.768399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.768430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.768552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.768582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.768764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.768796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.769003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.769034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.769234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.769267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.769526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.769558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.769814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.769844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.769964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.769995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 
00:36:10.084 [2024-07-15 12:25:59.770250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.770283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.770428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.770458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.770599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.770630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.770828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.770858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.771065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.771096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.771283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.771314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.771503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.771534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.771681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.771712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.771912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.771942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.772196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.772233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 
00:36:10.084 [2024-07-15 12:25:59.772441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.084 [2024-07-15 12:25:59.772471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.084 qpair failed and we were unable to recover it. 00:36:10.084 [2024-07-15 12:25:59.772676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.772707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.772897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.772927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.773192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.773223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.773439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.773469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.773752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.773783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.773923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.773955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.774100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.774131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.774267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.774298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.774575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.774606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 
00:36:10.085 [2024-07-15 12:25:59.774720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.774751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.775004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.775035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.775309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.775342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.775539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.775570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.775767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.775801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.775934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.775965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.776155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.776186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.776357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.776391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.776541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.776571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.776845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.776876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 
00:36:10.085 [2024-07-15 12:25:59.777178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.777208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.777408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.777439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.777573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.777605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.777793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.777824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.778101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.778131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.778266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.778298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.778426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.778456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.778662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.778693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.778997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.779027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.779287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.779318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 
00:36:10.085 [2024-07-15 12:25:59.779528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.779558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.779818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.779849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.780059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.780090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.085 [2024-07-15 12:25:59.780365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.085 [2024-07-15 12:25:59.780396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.085 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.780596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.780626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.780900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.780930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.781130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.781160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.781412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.781444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.781639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.781670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.781876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.781906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 
00:36:10.086 [2024-07-15 12:25:59.782108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.782138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.782337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.782375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.782650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.782681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.782834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.782865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.783067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.783099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.783289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.783321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.783532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.783563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.783863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.783893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.784029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.784059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.784191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.784221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 
00:36:10.086 [2024-07-15 12:25:59.784363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.784394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.784646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.784676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.784868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.784898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.785118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.785149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.785350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.785382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.785690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.785721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.785907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.785938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.786163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.786194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.786458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.786490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.786696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.786727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 
00:36:10.086 [2024-07-15 12:25:59.787001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.787032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.787237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.787269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.787402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.787433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.787710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.787741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.787891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.787921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.788054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.788084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.788282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.788316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.788504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.788534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.788675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.788712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.788853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.788885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 
00:36:10.086 [2024-07-15 12:25:59.789084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.789116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.086 qpair failed and we were unable to recover it. 00:36:10.086 [2024-07-15 12:25:59.789243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.086 [2024-07-15 12:25:59.789275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.789466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.789496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.789633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.789663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.789871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.789901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.790096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.790127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.790311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.790342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.790454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.790483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.790688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.790719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.790914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.790945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 
00:36:10.087 [2024-07-15 12:25:59.791102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.791132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.791272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.791304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.791501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.791532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.791729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.791761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.792037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.792067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.792292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.792324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.792462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.792492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.792633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.792664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.792862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.792892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.793108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.793139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 
00:36:10.087 [2024-07-15 12:25:59.793312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.793344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.793448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.793478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.793674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.793704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.793908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.793938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.794079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.794109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.794359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.794390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.794586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.794617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.794895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.794925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.795116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.795146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.795361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.795392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 
00:36:10.087 [2024-07-15 12:25:59.795523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.795554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.795694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.087 [2024-07-15 12:25:59.795725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.087 qpair failed and we were unable to recover it. 00:36:10.087 [2024-07-15 12:25:59.795852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.795882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.796018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.796049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.796244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.796276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.796471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.796502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.796701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.796731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.796929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.796960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.797156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.797186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.797393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.797424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 
00:36:10.088 [2024-07-15 12:25:59.797637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.797667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.797853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.797883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.798096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.798127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.798310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.798341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.798607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.798637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.798899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.798929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.799130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.799160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.799414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.799445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.799579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.799610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.799748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.799777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 
00:36:10.088 [2024-07-15 12:25:59.799925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.799954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.800181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.800211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.800432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.800463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.800670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.800701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.800970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.800997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.801230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.801258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.801465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.801492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.801694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.801721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.801972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.801999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.802263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.802292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 
00:36:10.088 [2024-07-15 12:25:59.802544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.802572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.802693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.802721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.802911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.802939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.803149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.803177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.803390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.803419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.803614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.803641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.803878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.803911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.804119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.804147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.804289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.804319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.804519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.804548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 
00:36:10.088 [2024-07-15 12:25:59.804803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.804831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.088 [2024-07-15 12:25:59.804966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.088 [2024-07-15 12:25:59.804994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.088 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.805146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.805176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.805370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.805399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.805651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.805679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.805905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.805935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.806064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.806095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.806235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.806267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.806423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.806453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.806685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.806715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 
00:36:10.089 [2024-07-15 12:25:59.807001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.807032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.807335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.807367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.807494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.807525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.807633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.807665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.807883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.807913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.808098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.808128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.808336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.808367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.808567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.808597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.808816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.808847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.809059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.809089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 
00:36:10.089 [2024-07-15 12:25:59.809212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.809250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.809448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.809478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.809732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.809766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.810046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.810082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.810316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.810348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.810549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.810580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.810713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.810743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.810950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.810981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.811175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.811205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.811415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.811446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 
00:36:10.089 [2024-07-15 12:25:59.811663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.811694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.811914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.811943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.812087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.812118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.812304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.812336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.812527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.812557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.812705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.812736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.812949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.812979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.813186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.813216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.813414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.813445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.089 [2024-07-15 12:25:59.813594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.813625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 
00:36:10.089 [2024-07-15 12:25:59.813775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.089 [2024-07-15 12:25:59.813805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.089 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.814031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.814062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.814254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.814286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.814502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.814532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.814646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.814676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.814881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.814911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.815191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.815221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.815440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.815471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.815688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.815718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.815907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.815938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 
00:36:10.090 [2024-07-15 12:25:59.816248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.816280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.816525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.816560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.816754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.816784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.816975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.817005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.817207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.817247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.817430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.817460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.817608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.817638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.817777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.817807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.818066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.818096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.818350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.818381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 
00:36:10.090 [2024-07-15 12:25:59.818583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.818614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.818811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.818841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.819061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.819091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.819242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.819274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.819473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.819504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.819701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.819732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.819923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.819955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.820093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.820123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.820307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.820339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.820467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.820498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 
00:36:10.090 [2024-07-15 12:25:59.820724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.820754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.820956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.820986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.821203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.821239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.821522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.821552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.821699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.821730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.821874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.821905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.822095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.822126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.822259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.822290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.822440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.822471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 00:36:10.090 [2024-07-15 12:25:59.822678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.090 [2024-07-15 12:25:59.822708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.090 qpair failed and we were unable to recover it. 
00:36:10.091 [2024-07-15 12:25:59.822900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.822930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.823151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.823181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.823338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.823369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.823568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.823598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.823904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.823935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.824136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.824166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.824288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.824319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.824535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.824565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.824768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.824798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.824936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.824966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 
00:36:10.091 [2024-07-15 12:25:59.825264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.825296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.825554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.825589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.825796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.825826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.826048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.826077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.826274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.826305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.826574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.826605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.826737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.826768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.827031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.827061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.827299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.827330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.827529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.827560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 
00:36:10.091 [2024-07-15 12:25:59.827773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.827803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.827940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.827971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.828223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.828262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.828392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.828423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.828608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.828639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.828775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.828805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.828931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.828962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.829237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.829269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.829404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.829435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.829719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.829749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 
00:36:10.091 [2024-07-15 12:25:59.830003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.830034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.830219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.830258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.830463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.830493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.830616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.830645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.830913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.830944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.831072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.831102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.831269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.831301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.831575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.831605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.091 [2024-07-15 12:25:59.831837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.091 [2024-07-15 12:25:59.831872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.091 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.832066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.832096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 
00:36:10.092 [2024-07-15 12:25:59.832280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.832311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.832564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.832595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.832734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.832764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.832972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.833002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.833149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.833179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.833459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.833490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.833680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.833711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.833873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.833903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.834096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.834126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.834343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.834374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 
00:36:10.092 [2024-07-15 12:25:59.834518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.834548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.834735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.834766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.834898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.834929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.835063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.835093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.835276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.835308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.835424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.835454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.835655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.835685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.835891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.835921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.836112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.836142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 00:36:10.092 [2024-07-15 12:25:59.836291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.092 [2024-07-15 12:25:59.836323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.092 qpair failed and we were unable to recover it. 
00:36:10.092 [2024-07-15 12:25:59.836448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.092 [2024-07-15 12:25:59.836479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420
00:36:10.092 qpair failed and we were unable to recover it.
[the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats 210 times in this span, with bracketed timestamps running from 12:25:59.836448 through 12:25:59.882085]
00:36:10.098 [2024-07-15 12:25:59.882206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.882256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.882465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.882496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.882705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.882736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.882927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.882958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.883150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.883180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.883378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.883409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.883607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.883638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.883783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.883813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.883938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.883969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.884153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.884184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 
00:36:10.098 [2024-07-15 12:25:59.884331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.884362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.884557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.884587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.884863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.884902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.885089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.885121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.885319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.885350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.885641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.885671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.885931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.885961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.886101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.886132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.886284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.886316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.886614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.886645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 
00:36:10.098 [2024-07-15 12:25:59.886862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.886892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.887084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.887115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.887244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.887275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.887408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.887439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.887636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.887666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.887907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.887937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.888125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.888193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.888480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.888515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.888673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.888705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.888903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.888934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 
00:36:10.098 [2024-07-15 12:25:59.889123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.889154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.098 [2024-07-15 12:25:59.889387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.098 [2024-07-15 12:25:59.889419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.098 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.889701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.889731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.889931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.889963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.890163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.890194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.890415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.890447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.890725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.890756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.890981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.891012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.891163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.891194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.891404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.891443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 
00:36:10.099 [2024-07-15 12:25:59.891697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.891727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.892868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.892899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.893112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.893143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.893282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.893315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.893620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.893651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 
00:36:10.099 [2024-07-15 12:25:59.893904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.893935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.894057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.894088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.894223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.894263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.894405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.894437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.894611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.894641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.894827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.894858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.895002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.895153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.895394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.895554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 
00:36:10.099 [2024-07-15 12:25:59.895726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.895884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.895915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.896157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.896188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.896468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.896500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.896701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.896732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.896940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.896971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.897242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.897275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.897433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.897464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.897729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.897760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.897970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.898001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 
00:36:10.099 [2024-07-15 12:25:59.898300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.898334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.898522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.898553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.099 qpair failed and we were unable to recover it. 00:36:10.099 [2024-07-15 12:25:59.898698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.099 [2024-07-15 12:25:59.898729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.899059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.899350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.899520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.899690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.899839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.899974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.900005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.900198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.900241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 
00:36:10.100 [2024-07-15 12:25:59.900447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.900479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.900647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.900679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.900811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.900842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.901094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.901125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.901332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.901363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.901546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.901576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.901726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.901757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.901875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.901905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.902110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.902141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.902347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.902380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 
00:36:10.100 [2024-07-15 12:25:59.902572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.902603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.902743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.902774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.903031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.903062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.903213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.903254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.903478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.903509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.903781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.903812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.904002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.904032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.904166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.904196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.904375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.904445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.904619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.904652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 
00:36:10.100 [2024-07-15 12:25:59.904850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.904882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.905078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.905109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.905265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.905297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.905445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.905476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.905669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.905699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.905889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.905920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.906068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.906107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.906326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.906358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.906564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.906595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.906736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.906766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 
00:36:10.100 [2024-07-15 12:25:59.906980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.907011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.907184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.907214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.907508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.907539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.907742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.907772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.100 qpair failed and we were unable to recover it. 00:36:10.100 [2024-07-15 12:25:59.908039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.100 [2024-07-15 12:25:59.908069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.908277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.908308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.908511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.908542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.908669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.908700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.908957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.908987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.909249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.909280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 
00:36:10.101 [2024-07-15 12:25:59.909474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.909505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.909634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.909666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.909857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.909891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.910152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.910183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.910410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.910442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.910596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.910628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.910894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.910925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.911131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.911162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.911357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.911389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.911509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.911541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 
00:36:10.101 [2024-07-15 12:25:59.911679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.911710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.911899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.911930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.912064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.912095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.912297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.912334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.912467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.912498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.912630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.912661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.912849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.912880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.913070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.913100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.913299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.913330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.913614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.913645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 
00:36:10.101 [2024-07-15 12:25:59.913915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.913946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.914239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.914271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.914396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.914427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.914637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.914667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.914802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.914832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.914974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.915196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.915388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.915613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.915765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 
00:36:10.101 [2024-07-15 12:25:59.915932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.915963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.916163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.101 [2024-07-15 12:25:59.916194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.101 qpair failed and we were unable to recover it. 00:36:10.101 [2024-07-15 12:25:59.916339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.916371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.916508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.916540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.916798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.916829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.917043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.917074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.917279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.917311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.917514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.917545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.917675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.917706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.917910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.917941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 
00:36:10.102 [2024-07-15 12:25:59.918151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.918186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.918403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.918435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.918642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.918673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.918816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.918847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.919056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.919087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.919292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.919325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.919481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.919512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.919716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.919746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.920027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.920058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.920191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.920222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 
00:36:10.102 [2024-07-15 12:25:59.920365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.920410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.920619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.920650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.920837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.920869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.921008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.921038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.921264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.921334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.921546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.921580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.921802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.921835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.922037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.922068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.922217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.922260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.922392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.922423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 
00:36:10.102 [2024-07-15 12:25:59.922703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.922734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.922921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.922952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.923210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.923250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.923500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.923531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.923722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.923753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.923879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.923911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.924178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.924208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.924403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.924443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.924642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.924672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.924794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.924824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 
00:36:10.102 [2024-07-15 12:25:59.924956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.924987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.925191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.925221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.925422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.925453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.925571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.102 [2024-07-15 12:25:59.925602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.102 qpair failed and we were unable to recover it. 00:36:10.102 [2024-07-15 12:25:59.925729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.925761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.925898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.925927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.926076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.926107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.926308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.926341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.926476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.926507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.926692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.926722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 
00:36:10.103 [2024-07-15 12:25:59.926929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.926960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.927157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.927188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.927448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.927480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.927736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.927767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.927975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.928005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.928197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.928237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.928363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.928393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.928535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.928566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.928767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.928798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.929082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.929113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 
00:36:10.103 [2024-07-15 12:25:59.929299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.929330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.929524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.929554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.929699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.929730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.929866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.929897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.930106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.930137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.930340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.930372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.930582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.930614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.930873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.930904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.931101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.931132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.931425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.931456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 
00:36:10.103 [2024-07-15 12:25:59.931593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.931624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.931760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.931790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.932099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.932257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.932485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.932698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.932852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.932986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.933023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.933177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.933206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.933494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.933525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 
00:36:10.103 [2024-07-15 12:25:59.933673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.933704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.933922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.933952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.934088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.934119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.934308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.934340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.934495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.934525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.934662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.934693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.934896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.934926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.935081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.935112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.935242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.935274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.935553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.935584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 
00:36:10.103 [2024-07-15 12:25:59.935727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.103 [2024-07-15 12:25:59.935757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.103 qpair failed and we were unable to recover it. 00:36:10.103 [2024-07-15 12:25:59.935915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.935946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.936202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.936253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.936465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.936496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.936689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.936720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.936894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.936925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.937122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.937152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.937409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.937441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.937576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.937606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.937882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.937913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 
00:36:10.104 [2024-07-15 12:25:59.938171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.938202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.938479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.938510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.938716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.938747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.938999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.939029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.939333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.939365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.939632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.939663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.939872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.939902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.940095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.940126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.940279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.940310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.940564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.940594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 
00:36:10.104 [2024-07-15 12:25:59.940913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.940943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.941151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.941182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.941329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.941359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.941652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.941682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.941881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.941912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.942042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.942073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.942326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.942358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.942486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.942522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.942732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.942764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.942951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.942982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 
00:36:10.104 [2024-07-15 12:25:59.943282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.943314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.943501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.943533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.943671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.943703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.943919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.943949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.944204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.944255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.944531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.944561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.944710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.944742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.944872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.944902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.945111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.945142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.945281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.945312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 
00:36:10.104 [2024-07-15 12:25:59.945439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.945469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.945676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.945706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.945902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.945931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.946051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.946081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.946343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.946374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.946513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.946543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.946692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.946723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.946864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.946894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.947077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.947107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.104 qpair failed and we were unable to recover it. 00:36:10.104 [2024-07-15 12:25:59.947236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.104 [2024-07-15 12:25:59.947268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 
00:36:10.105 [2024-07-15 12:25:59.947484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.947516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.947717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.947747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.947955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.947986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.948244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.948275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.948537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.948568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.948773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.948802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.948997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.949028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.949293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.949326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.949475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.949506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 00:36:10.105 [2024-07-15 12:25:59.949795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.949826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 
00:36:10.105 [2024-07-15 12:25:59.950082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.105 [2024-07-15 12:25:59.950112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.105 qpair failed and we were unable to recover it. 
00:36:10.105 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:25:59.950318 through 12:25:59.998083 ...]
00:36:10.109 [2024-07-15 12:25:59.998305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.998337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 
00:36:10.109 [2024-07-15 12:25:59.998548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.998579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.998804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.998834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.999034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.999065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.999253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.999284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.999493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.999523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.999729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.999759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:25:59.999911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:25:59.999941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.000133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.000164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.000358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.000390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.000641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.000672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 
00:36:10.109 [2024-07-15 12:26:00.000896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.000927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.001061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.001091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.001393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.001425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.001656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.001687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.001827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.001856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.002006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.002037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.109 [2024-07-15 12:26:00.002247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.109 [2024-07-15 12:26:00.002279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.109 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.002475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.002505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.002723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.002754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.002891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.002923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.003131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.003162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.003426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.003458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.003713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.003744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.003951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.003982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.004260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.004297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.004450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.004481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.004688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.004718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.004845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.004875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.005007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.005037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.005287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.005319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.005448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.005479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.005753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.005783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.005970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.006000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.006290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.006321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.006470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.006500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.006655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.006685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.006942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.006972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.007171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.007201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.007367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.007399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.007533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.007564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.007766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.007796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.008050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.008081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.008217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.008255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.008384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.008415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.008547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.008578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.008830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.008860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.009082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.009112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.009250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.009282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.009535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.009566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.009767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.009798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.009943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.009973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.010107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.010138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.010323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.010355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.010490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.010520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.010666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.010697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.010847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.010878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.011000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.011031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.011311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.011342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.011539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.011570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.011766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.011796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.011938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.011969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.012190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.012220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.012424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.012455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.012709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.012739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.012926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.012961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.013081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.013111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.013269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.013301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.013488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.013518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.013739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.013770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.013918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.013949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 
00:36:10.110 [2024-07-15 12:26:00.014159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.014190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.110 [2024-07-15 12:26:00.014330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.110 [2024-07-15 12:26:00.014361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.110 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.014495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.014526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.014781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.014811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.014973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.015005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.015165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.015196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.015460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.015491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.015696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.015728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.018338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.018478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.018795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.018833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 
00:36:10.111 [2024-07-15 12:26:00.019008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.019174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.019365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.019545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.019725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.019968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.019998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.020136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.020167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.020385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.020417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.020706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.020737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.020946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.020976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 
00:36:10.111 [2024-07-15 12:26:00.021205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.021263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.021468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.021499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.021687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.021718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.021921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.021953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.022084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.022117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.022354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.022386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.022681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.022712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.022903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.022934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.023137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.023167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.023327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.023359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 
00:36:10.111 [2024-07-15 12:26:00.023583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.023614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.023750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.023780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.023965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.023996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.024134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.024165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.024306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.024344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.024534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.024565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.024771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.024802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.025001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.025032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.025252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.025284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.025539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.025573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 
00:36:10.111 [2024-07-15 12:26:00.025716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.025746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.025959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.025990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.026245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.026277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.026534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.026564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.026822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.026853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.027048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.027078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.027200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.027251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.027377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.027408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.027612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.027644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.027898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 
00:36:10.111 [2024-07-15 12:26:00.028105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.111 [2024-07-15 12:26:00.028136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.111 qpair failed and we were unable to recover it. 00:36:10.111 [2024-07-15 12:26:00.028363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.028395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.028619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.028650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.028859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.028889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.029102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.029133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.029346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.029378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.029519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.029549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.029691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.029721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.029928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.029959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.030096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.030126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 
00:36:10.112 [2024-07-15 12:26:00.030274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.030306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.030588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.030619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.030765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.030796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.031008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.031038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.031179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.031210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.031392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.031423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.031619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.031650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.031851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.031882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.032079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.032109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.032333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.032364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 
00:36:10.112 [2024-07-15 12:26:00.032552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.032583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.032783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.032812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.033090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.033121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.033388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.033420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.033615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.033651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.033868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.033898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.034086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.034117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.034272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.034302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.034496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.034526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.034803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.034834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 
00:36:10.112 [2024-07-15 12:26:00.035040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.035070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.035203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.035255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.035475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.035505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.035702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.035733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.035885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.035916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.036110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.036141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.036339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.036372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.036624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.036655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.036911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.036942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.037232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.037263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 
00:36:10.112 [2024-07-15 12:26:00.037465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.037495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.037748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.037778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.038058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.038088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.038210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.038251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.038532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.038563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.038840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.038870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.039129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.039159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.039265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.039296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.039523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.039554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.039831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.039861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 
00:36:10.112 [2024-07-15 12:26:00.040089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.040119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.040355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.040387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.040645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.040675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.040873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.040903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.041105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.041135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.041391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.112 [2024-07-15 12:26:00.041422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.112 qpair failed and we were unable to recover it. 00:36:10.112 [2024-07-15 12:26:00.041695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.041725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.041891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.041922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.042073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.042103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.042256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.042293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 
00:36:10.113 [2024-07-15 12:26:00.042530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.042560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.042692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.042722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.042858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.042889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.043042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.043072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.043329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.043366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.043515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.043545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.043735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.043765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.043972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.044003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.044297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.044328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.044614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.044644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 
00:36:10.113 [2024-07-15 12:26:00.044948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.044978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.045125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.045156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.045304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.045335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.045523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.045553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.045870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.045901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.046105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.046135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.046333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.046363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.046641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.046671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.047000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.047032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.047291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.047323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 
00:36:10.113 [2024-07-15 12:26:00.047616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.047646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.047937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.047967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.048154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.048183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.048342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.048373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.113 [2024-07-15 12:26:00.048594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.113 [2024-07-15 12:26:00.048625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.113 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.048911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.048942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.049093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.049125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.049379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.049411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.049560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.049591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.049918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.049948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-07-15 12:26:00.050217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.050255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.050467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.050498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.050750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.050780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.050971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.051002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.051202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.051239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.051449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.051479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.051715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.051744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.051888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.051918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.052067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.052097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.052351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.052383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-07-15 12:26:00.052598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.052628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.052830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.052860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.053113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.053144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.053345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.053377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.053652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.053688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.053878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.053909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.054190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.054222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.054383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.054414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.054627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.054656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.054861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.054892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 
00:36:10.390 [2024-07-15 12:26:00.055221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.055260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.055580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.055611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.055763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.055795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.056083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.056115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.056379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.056410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.056746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.056776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.057015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.057046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.057324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.057356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.390 [2024-07-15 12:26:00.057658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.390 [2024-07-15 12:26:00.057689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.390 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.057995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.058027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 
00:36:10.391 [2024-07-15 12:26:00.058278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.058311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.058591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.058622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.058756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.058786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.059061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.059091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.059234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.059265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.059471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.059500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.059705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.059734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.060052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.060082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.060282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.060313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.060569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.060599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 
00:36:10.391 [2024-07-15 12:26:00.060798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.060828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.061034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f1b60 is same with the state(5) to be set 00:36:10.391 [2024-07-15 12:26:00.061275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.061346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.061646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.061681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.061895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.061926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.062129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.062160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.062368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.062402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.062605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.062636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.062841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.062871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.063125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.063156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 
00:36:10.391 [2024-07-15 12:26:00.063344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.063376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.063566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.063597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.063739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.063770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.064052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.064082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.064343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.064375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.064529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.064560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.064817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.064847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.065019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.065051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.065246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.065278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.065483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.065514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 
00:36:10.391 [2024-07-15 12:26:00.065712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.065743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.065891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.065922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.066133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.391 [2024-07-15 12:26:00.066165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.391 qpair failed and we were unable to recover it. 00:36:10.391 [2024-07-15 12:26:00.066296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.066327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.066535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.066566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.066794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.066826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.067044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.067074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.067211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.067254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.067451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.067490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.067766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.067796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 
00:36:10.392 [2024-07-15 12:26:00.068005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.068036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.068304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.068335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.068639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.068671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.068949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.068980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.069203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.069243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.069450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.069481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.069738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.069769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.069958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.069989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.070248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.070280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.070476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.070508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 
00:36:10.392 [2024-07-15 12:26:00.070793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.070824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.071032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.071063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.071279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.071312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.071441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.071472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.071687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.071718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.072014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.072045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.072348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.072379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.072570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.072601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.072776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.072806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.073074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.073105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 
00:36:10.392 [2024-07-15 12:26:00.073306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.073338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.073488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.073519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.073798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.073829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.073972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.074003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.074272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.074305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.074514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.074545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.074823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.074853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.075009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.075040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.392 qpair failed and we were unable to recover it. 00:36:10.392 [2024-07-15 12:26:00.075243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.392 [2024-07-15 12:26:00.075275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.075571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.075601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 
00:36:10.393 [2024-07-15 12:26:00.075741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.075772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.075976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.076007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.076282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.076315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.076505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.076535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.076789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.076819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.077008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.077039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.077323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.077354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.077635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.077666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.077919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.077955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.078277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.078311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 
00:36:10.393 [2024-07-15 12:26:00.078484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.078515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.078733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.078765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.078916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.078947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.079097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.079129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.079295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.079326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.079622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.079663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.079852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.079892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.080137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.080176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.080377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.080424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.080743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.080789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 
00:36:10.393 [2024-07-15 12:26:00.080970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.081010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.081190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.081243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.081526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.081575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.081808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.081865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.082104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.082171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.082835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.083316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.085282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.085338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.085535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.085570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.085747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.085780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.085932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.085962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 
00:36:10.393 [2024-07-15 12:26:00.086195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.393 [2024-07-15 12:26:00.086240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.393 qpair failed and we were unable to recover it. 00:36:10.393 [2024-07-15 12:26:00.086497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.086528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.086727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.086757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.086987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.087018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.087338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.087370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.087576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.087644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.087903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.087957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.088215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.088276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.088440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.088473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.088744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.088777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 
00:36:10.394 [2024-07-15 12:26:00.089008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.089039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.089347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.089380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.089543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.089573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.089827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.089860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.090118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.090148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.090299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.090330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.090555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.090587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.090740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.090772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.090976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.091014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.091163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.091194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 
00:36:10.394 [2024-07-15 12:26:00.091384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.091416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.091650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.091681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.091826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.091857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.092056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.092087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.092359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.092391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.092589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.092619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.092823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.092853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.093115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.093146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.093403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.093435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.093586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.093617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 
00:36:10.394 [2024-07-15 12:26:00.093815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.093846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.094103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.094134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.094336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.094368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.094570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.094600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.394 qpair failed and we were unable to recover it. 00:36:10.394 [2024-07-15 12:26:00.094821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.394 [2024-07-15 12:26:00.094853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.095056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.095086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.095278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.095313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.095468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.095498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.095730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.095761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.096016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.096049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 
00:36:10.395 [2024-07-15 12:26:00.096278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.096309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.096498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.096529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.096665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.096696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.096843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.096874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.097088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.097119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.097354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.097396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.097639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.097671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.097871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.097903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.098156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.098187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.098452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.098484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 
00:36:10.395 [2024-07-15 12:26:00.098640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.098670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.098860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.098891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.099172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.099202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.099465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.099496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.099698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.099729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.099995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.100026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.100244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.100276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.100487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.100518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.100794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.100833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.100968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.100998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 
00:36:10.395 [2024-07-15 12:26:00.101251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.101284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.101426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.101456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.101664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.101694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.101960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.101991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.102187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.395 [2024-07-15 12:26:00.102217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.395 qpair failed and we were unable to recover it. 00:36:10.395 [2024-07-15 12:26:00.102483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.102515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.102668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.102697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.102934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.102965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.103163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.103194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.103540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.103576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 
00:36:10.396 [2024-07-15 12:26:00.103741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.103772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.104053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.104083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.104367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.104400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.104654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.104684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.104945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.104975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.105169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.105199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.105450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.105482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.105628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.105659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.105880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.105911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.106120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.106150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 
00:36:10.396 [2024-07-15 12:26:00.106362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.106394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.106673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.106704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.107005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.107035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.107290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.107321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.107599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.107630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.107926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.107962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.108265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.108296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.108504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.108534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.108688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.108718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.108938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.108968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 
00:36:10.396 [2024-07-15 12:26:00.109174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.109205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.109412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.109443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.109721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.109751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.109969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.109999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.110279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.110311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.110516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.110547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.110682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.110712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.110928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.110958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.111159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.111190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.111345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.111379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 
00:36:10.396 [2024-07-15 12:26:00.111533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.111564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.111785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.111816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.112068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.112098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.112365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.112398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.396 qpair failed and we were unable to recover it. 00:36:10.396 [2024-07-15 12:26:00.112604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.396 [2024-07-15 12:26:00.112635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.112909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.112940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.113098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.113129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.113333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.113365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.113582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.113613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.113754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.113785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 
00:36:10.397 [2024-07-15 12:26:00.114086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.114117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.114398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.114429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.114653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.114685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.114841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.114873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.115141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.115172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.115355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.115388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.115652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.115683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.115913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.115944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.116142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.116173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.116329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.116361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 
00:36:10.397 [2024-07-15 12:26:00.116552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.116583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.116770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.116800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.117075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.117106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.117298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.117330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.117590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.117621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.117846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.117883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.118082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.118113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.118375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.118407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.118554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.118585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 00:36:10.397 [2024-07-15 12:26:00.118818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.397 [2024-07-15 12:26:00.118848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.397 qpair failed and we were unable to recover it. 
00:36:10.397 [2024-07-15 12:26:00.119111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.397 [2024-07-15 12:26:00.119142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.397 qpair failed and we were unable to recover it.
00:36:10.397 [...] the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with successive timestamps between the first entries above and the final entries below. [...]
00:36:10.403 [2024-07-15 12:26:00.173973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.403 [2024-07-15 12:26:00.174004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.403 qpair failed and we were unable to recover it.
00:36:10.403 [2024-07-15 12:26:00.174133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.174162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.174468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.174500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.174783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.174815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.175015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.175045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.175273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.175305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.175566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.175597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.175916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.175948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.176213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.176255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.176472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.176503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.176695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.176726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 
00:36:10.403 [2024-07-15 12:26:00.176988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.177019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.403 qpair failed and we were unable to recover it. 00:36:10.403 [2024-07-15 12:26:00.177243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.403 [2024-07-15 12:26:00.177276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.177552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.177584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.177724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.177755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.178034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.178065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.178333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.178366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.178673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.178704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.178978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.179008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.179218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.179258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.179405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.179437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 
00:36:10.404 [2024-07-15 12:26:00.179731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.179762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.180062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.180093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.180379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.180412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.180672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.180703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.180947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.180978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.181174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.181211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.181498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.181529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.181787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.181819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.182022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.182053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.182333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.182365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 
00:36:10.404 [2024-07-15 12:26:00.182577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.182609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.182894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.182925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.183155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.183186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.183468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.183499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.183691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.183722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.184003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.184034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.184258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.184290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.184581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.184613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.184838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.184868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.185115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.185164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 
00:36:10.404 [2024-07-15 12:26:00.185377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.185410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.185673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.185705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.186000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.186319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.186351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.186568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.186599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.186754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.404 [2024-07-15 12:26:00.186785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.404 qpair failed and we were unable to recover it. 00:36:10.404 [2024-07-15 12:26:00.186995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.187026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.187241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.187273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.187564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.187595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.187880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.187910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 
00:36:10.405 [2024-07-15 12:26:00.188169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.188200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.188363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.188396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.188594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.188625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.188899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.188931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.189123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.189154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.189413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.189446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.189640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.189684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.189945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.189976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.190120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.190152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.190345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.190377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 
00:36:10.405 [2024-07-15 12:26:00.190641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.190672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.190881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.190912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.191200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.191258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.191530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.191561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.191850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.191881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.192007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.192042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.192328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.192361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.192580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.192613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.192770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.192801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.193083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.193115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 
00:36:10.405 [2024-07-15 12:26:00.193375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.193407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.193687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.193718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.193867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.193899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.194102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.194133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.194345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.194377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.194581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.194611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.194894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.194925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.195172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.195203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.195447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.195480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.195719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.195751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 
00:36:10.405 [2024-07-15 12:26:00.195947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.195980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.196190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.196222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.196517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.196549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.196766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.196797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.197013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.197043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.197308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.197340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.405 [2024-07-15 12:26:00.197648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.405 [2024-07-15 12:26:00.197680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.405 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.197952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.197983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.198296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.198328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.198581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.198613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 
00:36:10.406 [2024-07-15 12:26:00.198873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.198905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.199107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.199139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.199346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.199379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.199654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.199685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.200012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.200043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.200261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.200294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.200557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.200587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.200820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.200852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.201136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.201168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.201376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.201408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 
00:36:10.406 [2024-07-15 12:26:00.201654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.201685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.201898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.201930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.202211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.202255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.202549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.202581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.202814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.202845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.203135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.203171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.203462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.203495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.203787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.203818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.204111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.204142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.204366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.204399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 
00:36:10.406 [2024-07-15 12:26:00.204714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.204746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.204966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.204997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.205192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.205223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.205497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.205528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.205828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.205860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.206148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.206179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.206457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.206494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.206766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.206798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.207026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.207058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.207348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.207381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 
00:36:10.406 [2024-07-15 12:26:00.207578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.207609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.207868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.207900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.208119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.208150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.208362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.208397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.208689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.208719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.208983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.209015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.209252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.406 [2024-07-15 12:26:00.209285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.406 qpair failed and we were unable to recover it. 00:36:10.406 [2024-07-15 12:26:00.209571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.407 [2024-07-15 12:26:00.209602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.407 qpair failed and we were unable to recover it. 00:36:10.407 [2024-07-15 12:26:00.209802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.407 [2024-07-15 12:26:00.209833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.407 qpair failed and we were unable to recover it. 00:36:10.407 [2024-07-15 12:26:00.210047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.407 [2024-07-15 12:26:00.210080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.407 qpair failed and we were unable to recover it. 
00:36:10.407 [2024-07-15 12:26:00.210297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:10.407 [2024-07-15 12:26:00.210329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 
00:36:10.407 qpair failed and we were unable to recover it. 
00:36:10.407-00:36:10.412 [2024-07-15 12:26:00.210541 through 12:26:00.268343] the same three-line sequence (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt in this window: one further attempt against tqpair=0x7fa23c000b90, then repeated attempts against tqpair=0x7fa234000b90.
00:36:10.412 [2024-07-15 12:26:00.268636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.268668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.268814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.268846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.269116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.269148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.269393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.269426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.269721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.269753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.269969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.270002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.270208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.270263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.270489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.270520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.270736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.270767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.271060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.271092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 
00:36:10.412 [2024-07-15 12:26:00.271332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.271366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.271569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.271601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.271755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.271786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.272000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.272031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.272375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.272408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.272650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.272682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.272909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.272941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.273176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.273209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.273501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.273533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.273758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.273789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 
00:36:10.412 [2024-07-15 12:26:00.274088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.274119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.274398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.274432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.274732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.274764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.275077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.275110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.275398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.275432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.275660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.275692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.412 qpair failed and we were unable to recover it. 00:36:10.412 [2024-07-15 12:26:00.275991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.412 [2024-07-15 12:26:00.276024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.276296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.276329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.276532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.276565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.276790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.276822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-07-15 12:26:00.277063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.277094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.277350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.277383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.277709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.277747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.277891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.277924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.278136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.278169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.278413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.278447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.278601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.278633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.278839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.278872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.279176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.279209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.279519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.279552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-07-15 12:26:00.279771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.279803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.280008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.280040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.280313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.280347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.280574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.280605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.280823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.280855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.281145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.281177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.281426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.281459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.281782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.281814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.282018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.282050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.282321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.282354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-07-15 12:26:00.282564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.282597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.282819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.282850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.283055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.283087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.283380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.283413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.283579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.283610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.283913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.283946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.284241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.284274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.284566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.284900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.284932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.285070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.285102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 
00:36:10.413 [2024-07-15 12:26:00.285383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.285417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.285713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.285745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.286018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.286049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.286356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.286390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.286542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.413 [2024-07-15 12:26:00.286575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.413 qpair failed and we were unable to recover it. 00:36:10.413 [2024-07-15 12:26:00.286866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.286899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.287212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.287254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.287554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.287586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.287791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.287823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.288024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.288056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-07-15 12:26:00.288326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.288360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.288661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.288692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.288909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.288952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.289254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.289288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.289461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.289493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.289708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.289740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.289938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.289971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.290186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.290217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.290477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.290511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.290786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.290818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-07-15 12:26:00.291028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.291060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.291214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.291270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.291512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.291545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.291824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.291856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.292158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.292189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.292499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.292531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.292767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.292800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.293068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.293100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.293400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.293434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.293739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.293771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-07-15 12:26:00.294010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.294042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.294261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.294294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.294594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.294626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.294856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.294888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.295182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.295214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.295518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.295550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.295765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.295797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.296067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.296099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.296339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.296371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.296621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.296654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 
00:36:10.414 [2024-07-15 12:26:00.296871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.296902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.297103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.297135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.297333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.297367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.297574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.297606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.297896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.297928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.298058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.298090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.298368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.298401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.298604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.414 [2024-07-15 12:26:00.298636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.414 qpair failed and we were unable to recover it. 00:36:10.414 [2024-07-15 12:26:00.298907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.298939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.299139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.299170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-07-15 12:26:00.299384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.299418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.299687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.299718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.299935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.299973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.300109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.300141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.300305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.300338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.300560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.300591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.300889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.300922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.301214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.301258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.301576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.301608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.301822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.301855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-07-15 12:26:00.302154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.302185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.302476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.302510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.302804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.302836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.303130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.303161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.303466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.303500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.303803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.303835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.304012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.304045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.304250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.304283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.304513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.304545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 00:36:10.415 [2024-07-15 12:26:00.304842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.415 [2024-07-15 12:26:00.304875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.415 qpair failed and we were unable to recover it. 
00:36:10.415 [2024-07-15 12:26:00.305093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.415 [2024-07-15 12:26:00.305126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:10.415 qpair failed and we were unable to recover it.
00:36:10.415 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 12:26:00.305 through 12:26:00.351; duplicate retry entries elided ...]
00:36:10.419 [2024-07-15 12:26:00.352074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.419 [2024-07-15 12:26:00.352151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.419 qpair failed and we were unable to recover it.
00:36:10.419 [... the same sequence then repeats for tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 from 12:26:00.352 through 12:26:00.361; duplicate retry entries elided ...]
00:36:10.420 [2024-07-15 12:26:00.362150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.362184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.362367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.362401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.362695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.362727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.363028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.363062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.363267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.363313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.363457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.363491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.363724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.363757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.363980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.364014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.364200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.364241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.364425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.364457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-07-15 12:26:00.364677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.364710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.364919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.364953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.365181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.365221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.365549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.365583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.365804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.365837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.366067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.366100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.366260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.366294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.366602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.366636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.366796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.366831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.367048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.367081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-07-15 12:26:00.367370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.367404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.367575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.367609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.367739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.367773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.367976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.368008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.368148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.368182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.368351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.368385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.368613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.368646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.368869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.368902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.369128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.369162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.369329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.369363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 
00:36:10.420 [2024-07-15 12:26:00.369510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.369542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.369750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.420 [2024-07-15 12:26:00.369782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.420 qpair failed and we were unable to recover it. 00:36:10.420 [2024-07-15 12:26:00.369926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.421 [2024-07-15 12:26:00.369959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.421 qpair failed and we were unable to recover it. 00:36:10.421 [2024-07-15 12:26:00.370098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.421 [2024-07-15 12:26:00.370130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.421 qpair failed and we were unable to recover it. 00:36:10.421 [2024-07-15 12:26:00.370294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.421 [2024-07-15 12:26:00.370328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.421 qpair failed and we were unable to recover it. 00:36:10.421 [2024-07-15 12:26:00.370551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.421 [2024-07-15 12:26:00.370583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.421 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.370739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.370771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.371089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.371124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.371346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.371381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.371541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.371580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.371822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.371854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.372076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.372109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.372336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.372368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.372589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.372621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.372890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.372922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.373148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.373180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.373353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.373386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.373590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.373622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.373859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.373890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.374185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.374218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.374560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.374593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.374773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.374805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.374949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.374987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.375193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.375237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.375393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.375426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.375623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.375656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.375818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.375850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.376116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.376149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.376352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.376385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.376610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.376642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.376799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.376831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.377051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.377082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.377383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.377416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.377556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.377589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.377862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.377895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.378103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.378135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.378421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.378454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.378642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.378675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.378886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.378918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.379117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.379150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.379353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.379387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.379538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.379571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.379836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.379869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.380022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.380054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.380343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.380376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.380574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.380606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.380747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.380779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.380995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.381026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.381322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.381355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.381580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.381614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.381838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.381870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.382080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.382113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.382325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.382358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.382558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.382590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.382732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.382764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.382950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.382983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.383136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.383169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.383391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.383423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.383632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.383664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.383863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.383895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 
00:36:10.697 [2024-07-15 12:26:00.384100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.384132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.384379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.384412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.384632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.384664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.384886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.384919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.385052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.385085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.385313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.385345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.697 qpair failed and we were unable to recover it. 00:36:10.697 [2024-07-15 12:26:00.385558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.697 [2024-07-15 12:26:00.385589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.385800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.385832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.386032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.386065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.386207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.386257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 
00:36:10.698 [2024-07-15 12:26:00.386463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.386495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.386638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.386670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.386939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.386971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.387181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.387212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.387417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.387449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.387595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.387627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.387802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.387834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.388033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.388064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.388194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.388235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.388469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.388501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 
00:36:10.698 [2024-07-15 12:26:00.388772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.388804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.388947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.388979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.389126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.389157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.389369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.389403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.389573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.389605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.389840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.389872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.390063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.390096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.390307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.390339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.390534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.390567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.390712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.390748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 
00:36:10.698 [2024-07-15 12:26:00.390901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.390933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.391132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.391163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.391385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.391417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.391625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.391657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.391795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.391827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.391954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.391986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.392298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.392330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.392619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.392650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.392858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.392890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 00:36:10.698 [2024-07-15 12:26:00.393036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.698 [2024-07-15 12:26:00.393069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.698 qpair failed and we were unable to recover it. 
00:36:10.698 [2024-07-15 12:26:00.393379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.698 [2024-07-15 12:26:00.393411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.698 qpair failed and we were unable to recover it.
00:36:10.698 [... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats back-to-back through 2024-07-15 12:26:00.452; duplicate entries collapsed ...]
00:36:10.701 [2024-07-15 12:26:00.452004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.701 [2024-07-15 12:26:00.452035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.701 qpair failed and we were unable to recover it.
00:36:10.701 [2024-07-15 12:26:00.452261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.452296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.452516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.452548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.452766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.452801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.453019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.453051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.453200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.453245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.453548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.453581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.453838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.453872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.454085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.454117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.454495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.454531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.454773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.454806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 
00:36:10.701 [2024-07-15 12:26:00.454969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.455002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.455304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.455339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.455620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.455653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.455949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.455982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.456271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.456306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.456602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.701 [2024-07-15 12:26:00.456636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.701 qpair failed and we were unable to recover it. 00:36:10.701 [2024-07-15 12:26:00.456926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.456960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.457247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.457282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.457580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.457613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.457859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.457897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.458263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.458300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.458599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.458632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.458945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.458976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.459296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.459332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.459549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.459581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.459786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.459818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.460116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.460149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.460375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.460409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.460637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.460669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.460947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.460979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.461286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.461319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.461549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.461581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.461832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.461864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.462090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.462123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.462351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.462385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.462682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.462715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.462996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.463028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.463253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.463287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.463531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.463568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.463798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.463831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.464137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.464170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.464596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.464631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.464838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.464871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.465106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.465139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.465444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.465478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.465711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.465744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.466091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.466123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.466369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.466405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.466703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.466736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.466943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.466976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.467253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.467288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.467569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.467602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.467907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.467940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.468222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.468268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.468563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.468595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.468876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.468908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.469149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.469181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.469527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.469562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.469801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.469834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.470150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.470188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.470518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.470553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.470758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.470790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.471065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.471096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.471398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.471432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.471721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.471754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.471965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.471998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.472150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.472182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.472437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.472471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.472700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.472732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.702 [2024-07-15 12:26:00.473017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.473050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 
00:36:10.702 [2024-07-15 12:26:00.473258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.702 [2024-07-15 12:26:00.473293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.702 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.473517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.473550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.473846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.473882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.474039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.474072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.474311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.474344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.474617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.474649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.474881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.474914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.475212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.475260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.475517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.475548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.475774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.475806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.476100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.476133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.476448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.476482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.476707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.476739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.477030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.477062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.477310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.477344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.477554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.477586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.477918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.477952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.478268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.478303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.478623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.478656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.478931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.478963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.479265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.479299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.479581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.479613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.479912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.479943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.480205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.480248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.480520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.480552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.480795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.480827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.481050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.481082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.481372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.481405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.481703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.481735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.481962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.481999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.482150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.482183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.482406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.482439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.482639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.482672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.482973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.483004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.483278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.483312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.483516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.483548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.483797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.483828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.484130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.484162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.484376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.484408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.484722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.484754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.485064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.485104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.485367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.485401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.485706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.485740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.486021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.486052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.486253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.486287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.486558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.486590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.486891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.486923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.487211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.487252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.487541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.487573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.487794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.487827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.488117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.488148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.488369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.488401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.488703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.488740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.488955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.488988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.489275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.489309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.489591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.489623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.489904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.489937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.490184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.490217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.490479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.490512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 00:36:10.703 [2024-07-15 12:26:00.490837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.703 [2024-07-15 12:26:00.490868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:10.703 qpair failed and we were unable to recover it. 
00:36:10.703 [2024-07-15 12:26:00.491069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.703 [2024-07-15 12:26:00.491100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:10.703 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7fa23c000b90 (addr=10.0.0.2, port=4420) from 12:26:00.491 through 12:26:00.512 ...]
00:36:10.705 [2024-07-15 12:26:00.512879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.705 [2024-07-15 12:26:00.512955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420
00:36:10.705 qpair failed and we were unable to recover it.
00:36:10.705 [2024-07-15 12:26:00.513260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.705 [2024-07-15 12:26:00.513333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:10.705 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for tqpair=0x7fa234000b90 (addr=10.0.0.2, port=4420) from 12:26:00.513 through 12:26:00.550 ...]
00:36:10.707 [2024-07-15 12:26:00.550276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.707 [2024-07-15 12:26:00.550307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:10.707 qpair failed and we were unable to recover it.
00:36:10.707 [2024-07-15 12:26:00.550537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.550567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.550809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.550839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.551041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.551071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.551342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.551374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.551544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.551576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.551797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.551827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.552098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.552128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.552365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.552397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.552619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.552650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.552847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.552878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 
00:36:10.707 [2024-07-15 12:26:00.553017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.553047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.553339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.553371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.553588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.553619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.553825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.553856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.554081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.554111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.554384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.554416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.554565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.554595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.554818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.554848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.555149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.555179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.555363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.555394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 
00:36:10.707 [2024-07-15 12:26:00.555617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.555648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.555867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.555898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.556167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.556198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.556371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.556403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.556537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.556567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.556782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.556813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.557045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.557082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.557286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.557318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.557530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.557561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.557781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.557811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 
00:36:10.707 [2024-07-15 12:26:00.558054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.558085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.558313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.558345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.558514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.558545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.558748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.558778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.559102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.559133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.559395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.559426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.559690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.559721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.560034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.560065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.560342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.560374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.560598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.560629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 
00:36:10.707 [2024-07-15 12:26:00.560816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.560847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.561171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.561202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.561525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.561555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.561778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.561809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.562074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.562104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.562382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.562415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.562662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.562692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.707 [2024-07-15 12:26:00.562915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.707 [2024-07-15 12:26:00.562944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.707 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.563166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.563197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.563367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.563398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.563667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.563697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.564018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.564048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.564272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.564303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.564480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.564511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.564724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.564755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.565059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.565090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.565393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.565424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.565647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.565678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.565892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.565922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.566274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.566306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.566583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.566615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.566785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.566815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.567026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.567057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.567370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.567400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.567628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.567659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.567983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.568014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.568210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.568256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.568479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.568510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.568749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.568779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.569020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.569050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.569271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.569302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.569479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.569510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.569780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.569810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.570079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.570109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.570349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.570381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.570603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.570633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.570822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.570853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.571182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.571213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.571495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.571526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.571808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.571837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.572149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.572179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.572402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.572434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.572636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.572666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.572810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.572841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.573006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.573037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.573323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.573354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.573527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.573557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.573844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.573875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.574165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.574195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.574500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.574532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.574769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.574799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.575017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.575047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.575214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.575260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.575441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.575473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.575874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.575904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.576116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.576146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.576361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.576393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.576687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.576717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.577053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.577083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.577416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.577448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.577749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.577779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.578023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.578053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.578373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.578404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.578601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.578631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.578845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.578876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.579034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.579065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.579351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.579389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.579589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.579619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.579859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.579889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.580037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.580067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 
00:36:10.708 [2024-07-15 12:26:00.580295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.580327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.580648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.580678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.580975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.581004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.581302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.708 [2024-07-15 12:26:00.581333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.708 qpair failed and we were unable to recover it. 00:36:10.708 [2024-07-15 12:26:00.581555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.581586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.581911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.581942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.582165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.582196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.582366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.582397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.582620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.582650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.582873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.582903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 
00:36:10.709 [2024-07-15 12:26:00.583207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.583249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.583515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.583546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.583782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.583812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.584122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.584153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.584307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.584340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.584567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.584597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.584798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.584828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.585100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.585130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.585266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.585297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 00:36:10.709 [2024-07-15 12:26:00.585512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.709 [2024-07-15 12:26:00.585543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.709 qpair failed and we were unable to recover it. 
00:36:10.709 [2024-07-15 12:26:00.585922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.709 [2024-07-15 12:26:00.585954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:10.709 qpair failed and we were unable to recover it.
00:36:10.709 [... the same three-line failure repeats continuously from 2024-07-15 12:26:00.586235 through 12:26:00.643027, with only the timestamps changing: connect() to 10.0.0.2 port 4420 is refused with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fa234000b90, and each qpair fails without recovery ...]
00:36:10.712 [2024-07-15 12:26:00.643323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.643355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.643580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.643611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.643768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.643799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.644085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.644116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.644361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.644393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.644620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.644651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.644979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.645015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.645318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.645350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.645492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.645522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.645676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.645706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 
00:36:10.712 [2024-07-15 12:26:00.645957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.645987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.646142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.646172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.646406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.646438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.646667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.646697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.647055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.647085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.647350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.647381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.647527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.647557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.647793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.647823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.648123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.648154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.648449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.648480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 
00:36:10.712 [2024-07-15 12:26:00.648783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.648813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.649043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.649074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.649279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.649310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.649507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.649537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.649768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.712 [2024-07-15 12:26:00.649799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.712 qpair failed and we were unable to recover it. 00:36:10.712 [2024-07-15 12:26:00.650080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.650111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.650382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.650412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.650617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.650647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.650957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.650988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.651201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.651245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.651479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.651510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.651749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.651779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.651952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.651983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.652282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.652314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.652600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.652631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.652817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.652847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.653049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.653079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.653372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.653403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.653577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.653607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.653781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.653811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.654146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.654176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.654466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.654497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.654791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.654822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.655138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.655168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.655461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.655493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.655706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.655737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.655946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.655982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.656113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.656144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.656417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.656450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.656674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.656705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.656952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.656982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.657264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.657296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.657598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.657629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.657797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.657828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.658032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.658062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.658283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.658315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.658457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.658487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.658724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.658754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.659052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.659082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.659315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.659347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.659564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.659595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.659817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.659848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.660122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.660151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.660404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.660436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.660638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.660669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.660967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.660997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.661277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.661309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.661544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.661574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.661839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.661869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.662107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.662137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.662339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.662371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.662643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.662674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.662893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.662923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.663129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.663160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.663439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.663470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.663715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.663746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.663989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.664020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.664257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.664289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.664562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.664593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.713 [2024-07-15 12:26:00.664916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.664947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 
00:36:10.713 [2024-07-15 12:26:00.665257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.713 [2024-07-15 12:26:00.665289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.713 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.665514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.665544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.665765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.665794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.666037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.666067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.666411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.666443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.666667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.666697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.666902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.666933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.667089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.667119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.667415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.667446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.667670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.667700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 
00:36:10.714 [2024-07-15 12:26:00.668015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.668045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.668343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.668376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.668625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.668654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.668983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.669014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.669294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.669325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.669453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.669483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.669700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.669730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.670023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.670054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.670218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.670260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.670458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.670488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 
00:36:10.714 [2024-07-15 12:26:00.670672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.670703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.671012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.671043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.671312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.671343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.671594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.671624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.671906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.671937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.672095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.672126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.672394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.672426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.672647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.672678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.673002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.673032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.673256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.673287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 
00:36:10.714 [2024-07-15 12:26:00.673558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.673589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.673887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.673917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.674137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.674167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.674394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.674432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.674706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.674738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.674895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.674926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.675123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.675153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.675392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.675424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.675763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.675793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.676011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.676040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 
00:36:10.714 [2024-07-15 12:26:00.676268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.676300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.676481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.676511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.676749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.676780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.677076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.677106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.677355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.677386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.677657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.677688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.677883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.677914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.678122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.678153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.678377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.678409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 00:36:10.714 [2024-07-15 12:26:00.678652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.714 [2024-07-15 12:26:00.678683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.714 qpair failed and we were unable to recover it. 
00:36:10.714 [2024-07-15 12:26:00.678992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.714 [2024-07-15 12:26:00.679023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:10.714 qpair failed and we were unable to recover it.
[... the three messages above repeat for every reconnect attempt against 10.0.0.2:4420 (tqpair=0x7fa234000b90) from 12:26:00.678992 through 12:26:00.736440 (console time 00:36:10.714 to 00:36:10.995); only the timestamps change between repetitions ...]
00:36:10.995 [2024-07-15 12:26:00.736608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.736638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.736861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.736891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.737183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.737213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.737390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.737421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.737666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.737697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.737938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.737969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.738250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.738281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.738456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.738486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.738692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.738723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.738984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.739014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 
00:36:10.995 [2024-07-15 12:26:00.739248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.739279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.739497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.739527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.739697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.739727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.739950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.739986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.740265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.740296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.740513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.740544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.740745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.740776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.740920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.740950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.741271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.741302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.741513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.741543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 
00:36:10.995 [2024-07-15 12:26:00.741763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.741794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.742088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.742118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.742286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.742318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.742491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.742522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.742674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.742704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.742908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.995 [2024-07-15 12:26:00.742939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.995 qpair failed and we were unable to recover it. 00:36:10.995 [2024-07-15 12:26:00.743240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.743273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.743559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.743589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.743759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.743789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.744024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.744054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-07-15 12:26:00.744207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.744247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.744398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.744428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.744646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.744676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.745007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.745038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.745277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.745307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.745459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.745490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.745761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.745791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.746003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.746033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.746328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.746361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.746563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.746592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-07-15 12:26:00.746742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.746774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.747073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.747103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.747429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.747461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.747688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.747719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.747979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.748009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.748240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.748271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.748539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.748570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.748911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.748945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.749200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.749254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.749478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.749509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-07-15 12:26:00.749709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.749739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.749953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.749983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.750273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.750305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.750455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.750491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.750788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.750818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.751122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.751152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.751388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.751421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.751640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.751671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.751888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.751918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.752141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.752172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 
00:36:10.996 [2024-07-15 12:26:00.752422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.752453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.752706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.752736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.752951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.752981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.753202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.753243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.753484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.753514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.753746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.753776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.996 qpair failed and we were unable to recover it. 00:36:10.996 [2024-07-15 12:26:00.754002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.996 [2024-07-15 12:26:00.754032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.754352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.754384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.754654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.754684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.755052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.755083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 
00:36:10.997 [2024-07-15 12:26:00.755291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.755324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.755594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.755625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.755900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.755931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.756217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.756257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.756538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.756568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.756879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.756909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.757169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.757201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.757558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.757590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.757751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.757780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.758020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.758051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 
00:36:10.997 [2024-07-15 12:26:00.758401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.758433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.758603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.758634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.758952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.758982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.759218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.759259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.759505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.759535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.759806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.759837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.760083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.760114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.760412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.760443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.760669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.760699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.761024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.761054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 
00:36:10.997 [2024-07-15 12:26:00.761316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.761348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.761508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.761538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.761832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.761864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.762166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.762201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.762442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.762473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.762698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.762728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.762983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.997 [2024-07-15 12:26:00.763012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.997 qpair failed and we were unable to recover it. 00:36:10.997 [2024-07-15 12:26:00.763313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.763345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.763508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.763538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.763712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.763741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 
00:36:10.998 [2024-07-15 12:26:00.763977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.764008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.764155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.764186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.764377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.764408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.764622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.764652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.764821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.764851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.765117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.765147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.765364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.765396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.765559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.765589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.765805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.765835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.766110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.766142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 
00:36:10.998 [2024-07-15 12:26:00.766435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.766468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.766644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.766674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.766993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.767023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.767298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.767330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.767551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.767580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.767787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.767817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.768088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.768118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.768393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.768425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.768659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.768690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.768903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.768933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 
00:36:10.998 [2024-07-15 12:26:00.769209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.769251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.769473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.769503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.769775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.998 [2024-07-15 12:26:00.769805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.998 qpair failed and we were unable to recover it. 00:36:10.998 [2024-07-15 12:26:00.769973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.770003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.770304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.770336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.770553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.770583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.770838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.770870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.771153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.771183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.771404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.771436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.771774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.771804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 
00:36:10.999 [2024-07-15 12:26:00.772022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.772053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.772275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.772306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.772462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.772492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.772798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.772834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.773108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.773138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.773439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.773470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.773689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.773720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.774038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.774068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.774345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.774376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.774614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.774644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 
00:36:10.999 [2024-07-15 12:26:00.774982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.775013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.775244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.775276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.775448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.775479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:10.999 qpair failed and we were unable to recover it. 00:36:10.999 [2024-07-15 12:26:00.775750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.999 [2024-07-15 12:26:00.775781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.776003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.776033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.776257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.776289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.776537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.776567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.776843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.776874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.777098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.777128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.777341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.777372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 
00:36:11.000 [2024-07-15 12:26:00.777621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.777653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.777867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.777897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.778167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.778197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.778524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.778556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.778779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.778810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.779094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.779124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.779397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.779429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.779571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.779601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.779825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.779856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.780075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.780106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 
00:36:11.000 [2024-07-15 12:26:00.780415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.780448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.780678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.780708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.781016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.781047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.781330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.781361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.781653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.781683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.782005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.782037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.782189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.782219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.782448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.782479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.782702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.782733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 00:36:11.000 [2024-07-15 12:26:00.782960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.000 [2024-07-15 12:26:00.782990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.000 qpair failed and we were unable to recover it. 
00:36:11.000 [2024-07-15 12:26:00.783287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.783319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.783537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.783568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.783770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.783800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.784057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.784093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.784302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.784334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.784582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.784613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.784782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.784812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.784963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.784993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.785211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.785251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.785545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.785575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 
00:36:11.001 [2024-07-15 12:26:00.785896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.785925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.786216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.786267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.786421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.786453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.786593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.786623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.786888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.786918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.787204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.787247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.787565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.787595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.787858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.787889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.788211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.788254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.788476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.788506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 
00:36:11.001 [2024-07-15 12:26:00.788679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.788710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.788910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.788941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.789093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.789122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.789262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.789294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.789564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.789595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.001 [2024-07-15 12:26:00.789913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.001 [2024-07-15 12:26:00.789942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.001 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.790241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.790273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.790566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.790597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.790750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.790781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.791053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.791084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 
00:36:11.002 [2024-07-15 12:26:00.791309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.791342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.791503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.791534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.791802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.791833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.792006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.792036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.792264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.792295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.792595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.792627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.792809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.792839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.793110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.793140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.793290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.793322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.793542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.793572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 
00:36:11.002 [2024-07-15 12:26:00.793791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.793821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.794060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.794090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.794353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.794384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.794549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.794585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.794786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.794817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.795136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.795167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.795390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.795421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.795674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.795705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.795949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.795979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.796266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.796297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 
00:36:11.002 [2024-07-15 12:26:00.796509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.796540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.002 [2024-07-15 12:26:00.796702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.002 [2024-07-15 12:26:00.796733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.002 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.796948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.796977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.797179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.797211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.797544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.797576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.797891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.797922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.798062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.798092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.798322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.798354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.798578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.798608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.798850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.798881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 
00:36:11.003 [2024-07-15 12:26:00.799099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.799130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.799373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.799405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.799619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.799650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.799830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.799861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.800164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.800194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.800388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.800424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.800647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.800677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.800825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.800854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.801053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.801083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.801283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.801315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 
00:36:11.003 [2024-07-15 12:26:00.801589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.801621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.801860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.801892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.802154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.802184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.802417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.802449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.802688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.802718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.802942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.802973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.803189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.803219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.803425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.003 [2024-07-15 12:26:00.803455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.003 qpair failed and we were unable to recover it. 00:36:11.003 [2024-07-15 12:26:00.803668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.803698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.803957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.803988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 
00:36:11.004 [2024-07-15 12:26:00.804256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.804288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.804561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.804591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.804890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.804921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.805213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.805253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.805480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.805511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.805823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.805855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.806089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.806120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.806420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.806452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.806625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.806655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.806949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.806980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 
00:36:11.004 [2024-07-15 12:26:00.807199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.807244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.807466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.807497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.807652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.807683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.807999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.808030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.808274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.808306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.808477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.808508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.808712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.808742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.808995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.809025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.809297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.809329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.809631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.809663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 
00:36:11.004 [2024-07-15 12:26:00.809828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.809858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.810101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.810131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.810504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.810535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.810734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.810764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.811064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.004 [2024-07-15 12:26:00.811095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.004 qpair failed and we were unable to recover it. 00:36:11.004 [2024-07-15 12:26:00.811377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.811408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.811676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.811706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.812051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.812083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.812367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.812399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.812617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.812647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 
00:36:11.005 [2024-07-15 12:26:00.812940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.812976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.813196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.813234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.813407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.813438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.813729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.813759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.814060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.814090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.814401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.814433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.814703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.814734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.814961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.814992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.815277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.815308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.815576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.815606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 
00:36:11.005 [2024-07-15 12:26:00.815768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.815798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.816115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.816146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.816466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.816499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.816724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.816754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.816924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.816956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.817176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.817207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.817491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.817523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.817743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.817773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.818064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.818094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 00:36:11.005 [2024-07-15 12:26:00.818314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.818347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.005 qpair failed and we were unable to recover it. 
00:36:11.005 [2024-07-15 12:26:00.818564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.005 [2024-07-15 12:26:00.818595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.818750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.818781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.819081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.819111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.819333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.819364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.819575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.819606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.819783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.819813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.820026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.820056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.820282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.820315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.820515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.820545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.820767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.820798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 
00:36:11.006 [2024-07-15 12:26:00.821068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.821099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.821313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.821344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.821473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.821503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.821715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.821747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.821964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.821994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.822271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.822302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.822572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.822603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.822764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.822794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.823073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.823104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.823384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.823415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 
00:36:11.006 [2024-07-15 12:26:00.823663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.823698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.823877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.823907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.824151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.824181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.824428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.824460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.824732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.824762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.825045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.825076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.825238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.825270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.825446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.825476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.825773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.825804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.006 qpair failed and we were unable to recover it. 00:36:11.006 [2024-07-15 12:26:00.826024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.006 [2024-07-15 12:26:00.826054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 
00:36:11.007 [2024-07-15 12:26:00.826274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.826306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.826529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.826559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.826781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.826811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.827153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.827185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.827367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.827398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.827565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.827595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.827763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.827794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.828080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.828110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.828272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.828303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.828576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.828606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 
00:36:11.007 [2024-07-15 12:26:00.828764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.828795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.829027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.829057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.829259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.829290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.829568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.829600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.829817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.829847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.830092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.830123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.830344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.830375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.830551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.830582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.830756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.830786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.831042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.831072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 
00:36:11.007 [2024-07-15 12:26:00.831391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.831423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.831571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.831603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.831821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.831852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.832171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.832203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.832387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.832417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.832643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.832674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.832987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.833017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.833302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.833334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.833557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.833589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.833761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.833791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 
00:36:11.007 [2024-07-15 12:26:00.834085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.834127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.834341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.834373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.834668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.834698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.834989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.835019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.835241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.835272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.835559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.007 [2024-07-15 12:26:00.835590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.007 qpair failed and we were unable to recover it. 00:36:11.007 [2024-07-15 12:26:00.835763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.835793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.836014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.836043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.836363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.836394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.836616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.836647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 
00:36:11.008 [2024-07-15 12:26:00.836820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.836850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.837108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.837138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.837410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.837441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.837673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.837703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.837861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.837892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.838138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.838170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.838402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.838434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.838602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.838632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.838895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.838927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.839206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.839249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 
00:36:11.008 [2024-07-15 12:26:00.839554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.839585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.839886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.839915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.840207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.840251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.840494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.840525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.840843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.840874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.841163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.841194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.841518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.841549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.841879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.841910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.842131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.842161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.842389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.842420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 
00:36:11.008 [2024-07-15 12:26:00.842690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.842721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.842965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.842996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.843266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.843297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.843514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.843543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.843769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.843799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.844091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.844122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.844267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.844299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.844532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.844563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.844800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.844830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.845134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.845163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 
00:36:11.008 [2024-07-15 12:26:00.845379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.845415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.845641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.845671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.845957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.845988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.846184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.846214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.846547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.846580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.846878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.846908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.008 qpair failed and we were unable to recover it. 00:36:11.008 [2024-07-15 12:26:00.847202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.008 [2024-07-15 12:26:00.847245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.847460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.847490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.847715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.847745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.848034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.848064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 
00:36:11.009 [2024-07-15 12:26:00.848382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.848414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.848591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.848622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.848875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.848906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.849127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.849158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.849415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.849447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.849740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.849770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.850005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.850036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.850321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.850352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.850550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.850582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.850822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.850852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 
00:36:11.009 [2024-07-15 12:26:00.851085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.851115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.851362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.851393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.851678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.851708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.851941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.851971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.852252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.852284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.852451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.852483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.852655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.852686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.852973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.853003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.853300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.853332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.853576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.853605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 
00:36:11.009 [2024-07-15 12:26:00.853825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.853856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.854076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.854106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.854327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.854358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.854569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.854600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.854966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.854998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.855222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.855264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.855433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.855463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.855750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.855780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.856027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.856058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.856329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.856380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 
00:36:11.009 [2024-07-15 12:26:00.856632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.856668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.856916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.856947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.857162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.857192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.857353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.857384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.857621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.857651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.857802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.857833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.009 qpair failed and we were unable to recover it. 00:36:11.009 [2024-07-15 12:26:00.858076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.009 [2024-07-15 12:26:00.858106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.858303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.858336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.858468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.858498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.858790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.858819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 
00:36:11.010 [2024-07-15 12:26:00.859137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.859169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.859357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.859388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.859631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.859662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.859832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.859862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.860154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.860184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.860368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.860401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.860604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.860635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.860843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.860875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.861145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.861175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.861337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.861368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 
00:36:11.010 [2024-07-15 12:26:00.861664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.861694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.861950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.861980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.862298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.862329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.862546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.862576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.862846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.862876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.863193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.863223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.863554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.863585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.863788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.863819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.864047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.864077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.864300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.864332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 
00:36:11.010 [2024-07-15 12:26:00.864543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.864573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.864816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.864846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.865115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.865145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.865355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.865387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.865514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.865545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.865765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.865795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.866116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.866147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.866308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.866340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.866562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.866592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 00:36:11.010 [2024-07-15 12:26:00.866762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.010 [2024-07-15 12:26:00.866792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.010 qpair failed and we were unable to recover it. 
00:36:11.011 [2024-07-15 12:26:00.867017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.867052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.867367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.867399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.867709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.867740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.867954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.867984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.868255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.868287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.868489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.868520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.868670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.868701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.868936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.868966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.869246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.869277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.869424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.869455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 
00:36:11.011 [2024-07-15 12:26:00.869747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.869777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.870009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.870039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.870265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.870296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.870447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.870478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.870753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.870783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.871100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.871130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.871412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.871443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.871655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.871685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.871969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.872000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.872311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.872343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 
00:36:11.011 [2024-07-15 12:26:00.872622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.872652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.872870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.872900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.873117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.873147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.873348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.873380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.873607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.873637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.873799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.011 [2024-07-15 12:26:00.873829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.011 qpair failed and we were unable to recover it. 00:36:11.011 [2024-07-15 12:26:00.874029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.874061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.874367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.874399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.874560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.874592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.874909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.874940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 
00:36:11.012 [2024-07-15 12:26:00.875145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.875174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.875424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.875456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.875673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.875703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.875848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.875878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.876184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.876215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.876502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.876533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.876706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.876736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.877097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.877126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.877290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.877321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.877534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.877564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 
00:36:11.012 [2024-07-15 12:26:00.877765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.877801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.878138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.878170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.878412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.878444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.878660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.878691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.878908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.878938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.879082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.879112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.879311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.879344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.879598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.879628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.879847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.879878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.880076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.880107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 
00:36:11.012 [2024-07-15 12:26:00.880309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.880340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.880564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.012 [2024-07-15 12:26:00.880595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.012 qpair failed and we were unable to recover it. 00:36:11.012 [2024-07-15 12:26:00.880840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.880871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.881144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.881176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.881421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.881453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.881618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.881649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.881942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.881973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.882266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.882298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.882519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.882549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.882824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.882854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 
00:36:11.013 [2024-07-15 12:26:00.883058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.883088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.883392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.883424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.883637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.883667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.883982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.884013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.884208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.884250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.884477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.884507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.884799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.884829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.885052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.885082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.885386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.885418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.885650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.885680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 
00:36:11.013 [2024-07-15 12:26:00.885907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.885938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.886139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.886169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.886408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.886439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.886610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.886641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.886865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.886895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.887109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.887139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.887429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.887462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.887685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.887715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.013 [2024-07-15 12:26:00.888080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.013 [2024-07-15 12:26:00.888110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.013 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.888383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.888415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 
00:36:11.014 [2024-07-15 12:26:00.888633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.888673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.888937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.888967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.889121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.889151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.889372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.889403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.889605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.889635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.889833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.889864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.890064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.890094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.890315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.890346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.890490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.890520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.890803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.890833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 
00:36:11.014 [2024-07-15 12:26:00.891038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.891068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.891345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.891378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.891662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.891692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.891907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.891938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.892216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.892257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.892418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.892448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.892620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.892650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.892892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.892921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.893216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.893256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.893598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.893629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 
00:36:11.014 [2024-07-15 12:26:00.893801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.893831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.894056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.894087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.894332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.894364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.894580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.894610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.014 qpair failed and we were unable to recover it. 00:36:11.014 [2024-07-15 12:26:00.894818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.014 [2024-07-15 12:26:00.894849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.895139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.895170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.895401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.895433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.895657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.895689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.896008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.896038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.896179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.896210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 
00:36:11.015 [2024-07-15 12:26:00.896461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.896493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.896714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.896744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.896989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.897020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.897326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.897358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.897647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.897677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.897976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.898008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.898301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.898333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.898534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.898564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.898879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.898910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.899199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.899253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 
00:36:11.015 [2024-07-15 12:26:00.899428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.899464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.899668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.899698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.899939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.899969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.900290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.900322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.900623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.900654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.900921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.900952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.901263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.901295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.901576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.901606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.901818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.901849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.015 qpair failed and we were unable to recover it. 00:36:11.015 [2024-07-15 12:26:00.902118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.015 [2024-07-15 12:26:00.902148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 
00:36:11.016 [2024-07-15 12:26:00.902362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.902394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.902628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.902659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.902877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.902908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.903152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.903182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.903480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.903511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.903733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.903764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.904131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.904162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.904413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.904445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.904667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.904697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.905004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.905035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 
00:36:11.016 [2024-07-15 12:26:00.905323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.905355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.905595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.905625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.905909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.905939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.906145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.906176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.906468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.906500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.906754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.906786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.907121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.907152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.907399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.907430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.907608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.907638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.907912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.907942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 
00:36:11.016 [2024-07-15 12:26:00.908241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.908273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.908586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.908616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.908907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.908938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.909186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.909217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.909516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.909547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.016 [2024-07-15 12:26:00.909704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.016 [2024-07-15 12:26:00.909734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.016 qpair failed and we were unable to recover it. 00:36:11.017 [2024-07-15 12:26:00.910056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.017 [2024-07-15 12:26:00.910086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.017 qpair failed and we were unable to recover it. 00:36:11.017 [2024-07-15 12:26:00.910286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.017 [2024-07-15 12:26:00.910318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.017 qpair failed and we were unable to recover it. 00:36:11.017 [2024-07-15 12:26:00.910565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.017 [2024-07-15 12:26:00.910595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.017 qpair failed and we were unable to recover it. 00:36:11.017 [2024-07-15 12:26:00.910815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.017 [2024-07-15 12:26:00.910845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.017 qpair failed and we were unable to recover it. 
00:36:11.017 [2024-07-15 12:26:00.911179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.017 [2024-07-15 12:26:00.911215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:11.017 qpair failed and we were unable to recover it.
[The same three-line error sequence repeats for every reconnect attempt from 12:26:00.911 through 12:26:00.970: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420, and the qpair fails without recovering.]
00:36:11.023 [2024-07-15 12:26:00.970616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.023 [2024-07-15 12:26:00.970646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:11.023 qpair failed and we were unable to recover it.
00:36:11.023 [2024-07-15 12:26:00.970888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.023 [2024-07-15 12:26:00.970918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.023 qpair failed and we were unable to recover it. 00:36:11.023 [2024-07-15 12:26:00.971117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.023 [2024-07-15 12:26:00.971148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.023 qpair failed and we were unable to recover it. 00:36:11.023 [2024-07-15 12:26:00.971442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.023 [2024-07-15 12:26:00.971474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.023 qpair failed and we were unable to recover it. 00:36:11.023 [2024-07-15 12:26:00.971756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.024 [2024-07-15 12:26:00.971787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.024 qpair failed and we were unable to recover it. 00:36:11.024 [2024-07-15 12:26:00.972061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.024 [2024-07-15 12:26:00.972092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.024 qpair failed and we were unable to recover it. 00:36:11.024 [2024-07-15 12:26:00.972409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.024 [2024-07-15 12:26:00.972440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.024 qpair failed and we were unable to recover it. 00:36:11.024 [2024-07-15 12:26:00.972738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.024 [2024-07-15 12:26:00.972768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.024 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.973033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.973256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.973288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.973533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.973564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 
00:36:11.304 [2024-07-15 12:26:00.973848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.973879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.974171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.974202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.974453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.974484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.974800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.304 [2024-07-15 12:26:00.974830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.304 qpair failed and we were unable to recover it. 00:36:11.304 [2024-07-15 12:26:00.975135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.975166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.975345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.975376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.975656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.975686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.975883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.975921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.976149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.976180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.976486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.976518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 
00:36:11.305 [2024-07-15 12:26:00.976805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.976835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.977066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.977097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.977313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.977344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.977563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.977593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.977910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.977941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.978215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.978256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.978528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.978559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.978786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.978816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.979108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.979138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.979411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.979442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 
00:36:11.305 [2024-07-15 12:26:00.979655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.979980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.980011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.980235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.980267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.980563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.980593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.980825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.980855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.981141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.981170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.981406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.981438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.981663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.981693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.981941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.981972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.982265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.982296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 
00:36:11.305 [2024-07-15 12:26:00.982546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.982575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.982790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.982821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.982973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.983003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.983157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.983186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.983446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.983477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.983792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.983822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.984035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.984065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.984208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.984246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.984493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.984524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.984844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.984874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 
00:36:11.305 [2024-07-15 12:26:00.985182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.985212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.985498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.985527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.985820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.985850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.986150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.986180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.986401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.986433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.986598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.986628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.986898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.986928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.987185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.987221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.987534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.987564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.987839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.987868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 
00:36:11.305 [2024-07-15 12:26:00.988106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.988136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.988407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.988438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.988744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.988774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.989056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.989087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.989317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.989348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.305 [2024-07-15 12:26:00.989641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.305 [2024-07-15 12:26:00.989671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.305 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.989875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.989905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.990106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.990135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.990409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.990440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.990660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.990690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:00.990928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.990958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.991172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.991202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.991509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.991540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.991849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.991878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.992117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.992147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.992452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.992485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.992769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.992799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.993097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.993128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.993340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.993371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.993645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.993676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:00.993993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.994024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.994245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.994276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.994486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.994517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.994809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.994840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.995116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.995146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.995295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.995327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.995597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.995627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.995914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.995943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.996251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.996283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.996410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.996441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:00.996737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.996766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.996966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.996996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.997300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.997332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.997553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.997583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.997726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.997756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.998001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.998031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.998323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.998354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.998624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.998660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.998978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.999008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.999308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.999339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:00.999628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.999658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:00.999880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:00.999910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.000155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.000185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.000429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.000460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.000699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.000729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.001024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.001054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.001350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.001383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.001677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.001707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.001997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.002027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.002319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.002351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:01.002687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.002718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.002928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.002958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.003253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.003284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.003425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.003456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.003657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.003687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.003890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.003921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.004136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.004166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.004322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.004353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.004626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.004656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.306 [2024-07-15 12:26:01.004974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.005005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 
00:36:11.306 [2024-07-15 12:26:01.005214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.306 [2024-07-15 12:26:01.005256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.306 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.005543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.005573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.005789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.005819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.006088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.006118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.006439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.006471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.006741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.006771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.006993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.007024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.007246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.007277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.007571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.007601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 00:36:11.307 [2024-07-15 12:26:01.007892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.307 [2024-07-15 12:26:01.007922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.307 qpair failed and we were unable to recover it. 
00:36:11.307 [2024-07-15 12:26:01.008191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.307 [2024-07-15 12:26:01.008221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:11.307 qpair failed and we were unable to recover it.
[... the same pair of errors (posix.c:1038:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 12:26:01.008 through 12:26:01.028, each ending with "qpair failed and we were unable to recover it." ...]
[... the retries on tqpair=0x7fa234000b90 keep failing the same way through 12:26:01.029 ...]
00:36:11.308 [2024-07-15 12:26:01.030156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.308 [2024-07-15 12:26:01.030241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420
00:36:11.308 qpair failed and we were unable to recover it.
[... from here on the connection attempts are reported against a new tqpair=0x7fa244000b90; the identical connect() failed, errno = 111 / sock connection error pair repeats from 12:26:01.030 through 12:26:01.053, each ending with "qpair failed and we were unable to recover it." ...]
00:36:11.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1367131 Killed "${NVMF_APP[@]}" "$@"
[... connect() to 10.0.0.2:4420 keeps failing with errno = 111 on tqpair=0x7fa244000b90 while the target application is down, from 12:26:01.054 through 12:26:01.055 ...]
00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
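The errno = 111 flood above is expected at this point in the test: target_disconnect.sh has just killed the running target application (pid 1367131), so nothing accepts TCP connections on 10.0.0.2:4420 until nvmfappstart brings up a new nvmf_tgt. A minimal shell sketch of how this could be confirmed by hand, assuming a typical Linux host and the namespace name shown in the log (an illustrative aside, not part of the test scripts):

  # errno 111 is ECONNREFUSED on typical Linux systems (header path may vary by distro):
  grep -w 'ECONNREFUSED' /usr/include/asm-generic/errno.h
  # With the old nvmf_tgt dead and the new one not yet started, there is no NVMe/TCP
  # listener on port 4420 inside the test namespace, so every connect() is refused:
  ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no listener on port 4420 yet"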
[... the connect() failed, errno = 111 / sock connection error pair for tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 keeps repeating from 12:26:01.056 through 12:26:01.061, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:36:11.310 [2024-07-15 12:26:01.062177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.062208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1367844 00:36:11.310 [2024-07-15 12:26:01.062451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.062483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1367844 00:36:11.310 [2024-07-15 12:26:01.062782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:11.310 [2024-07-15 12:26:01.062812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1367844 ']' 00:36:11.310 [2024-07-15 12:26:01.063113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.063145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 [2024-07-15 12:26:01.063389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.063422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 [2024-07-15 12:26:01.063671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.063703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 [2024-07-15 12:26:01.063923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.310 [2024-07-15 12:26:01.063953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.310 qpair failed and we were unable to recover it. 00:36:11.310 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
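00:36:11.310 Interleaved with the connect() retries above, common.sh starts the NVMe-oF target that the initiator is waiting for; the launch command is split across the trace, so it is restated on its own below for readability (the flag meanings follow standard SPDK application options and the backgrounding is a sketch of the usual pattern, not a verbatim copy of common.sh):

    # launch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace:
    #   -m 0xF0   reactor core mask (cores 4-7)
    #   -e 0xFFFF tracepoint group mask
    #   -i 0      shared-memory instance id
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!   # PID of the backgrounded target; 1367844 in this run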
00:36:11.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.311 [2024-07-15 12:26:01.064179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.064211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:11.311 [2024-07-15 12:26:01.064429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.064462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.311 [2024-07-15 12:26:01.064675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.064707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.064842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.064872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.065146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.065176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.065444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.065476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.065726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.065764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.065935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.065965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.066185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.066219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 
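00:36:11.311 waitforlisten then blocks until the freshly started target is alive and serving its JSON-RPC socket at /var/tmp/spdk.sock; until that happens, the initiator-side connect() attempts above keep failing with ECONNREFUSED. A simplified sketch of that wait (not the actual waitforlisten implementation from the autotest helpers) could look like:

    # poll until the target answers on its RPC socket, bailing out if it died
    while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        sleep 0.5
    done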
00:36:11.311 [2024-07-15 12:26:01.066525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.066556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.066799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.066829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.067178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.067208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.067453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.067484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.067729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.067759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.067989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.068019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.068217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.068258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.068507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.068538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.068835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.068866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.069109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.069139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 
00:36:11.311 [2024-07-15 12:26:01.069437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.069469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.069762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.069794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.069959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.069989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.070269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.070301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.070528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.070558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.070732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.070762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.071050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.071080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.071298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.071332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.071630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.071661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.071897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.071927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 
00:36:11.311 [2024-07-15 12:26:01.072201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.072240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.072511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.072543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.072856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.072886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.073099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.073129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.073402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.073441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.073729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.073760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.074003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.074033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.074312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.074344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.074626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.074656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.075011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.075041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 
00:36:11.311 [2024-07-15 12:26:01.075319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.075353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.075597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.075627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.075919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.075953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.076223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.076261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.076471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.076502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.076742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.076772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.077073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.311 [2024-07-15 12:26:01.077103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.311 qpair failed and we were unable to recover it. 00:36:11.311 [2024-07-15 12:26:01.077398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.077430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.077730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.077761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.077968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.077998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.078297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.078328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.078529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.078560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.078856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.078886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.079175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.079205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.079484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.079515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.079788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.079818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.080116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.080146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.080378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.080409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.080556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.080587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.080799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.080829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.081024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.081054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.081390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.081422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.081638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.081669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.081838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.081868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.082167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.082197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.082371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.082401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.082697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.082727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.082959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.082989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.083208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.083260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.083484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.083515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.083803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.083832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.084131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.084162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.084455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.084487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.084684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.084714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.085044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.085079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.085281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.085311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.085529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.085559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.085875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.085906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.086198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.086240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.086476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.086506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.086746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.086776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.087125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.087155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.087451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.087482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.087708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.087739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.087959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.087989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.088188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.088217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.088451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.088482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.088752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.088782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.088987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.089017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.089235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.089266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.089585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.089616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.089901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.089931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.090236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.090267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.090464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.090494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.090719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.090749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.090994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.091023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.091267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.091298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.091568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.091599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.091759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.091789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.092062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.092092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 
00:36:11.312 [2024-07-15 12:26:01.092406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.092438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.312 [2024-07-15 12:26:01.092735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.312 [2024-07-15 12:26:01.092766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.312 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.092923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.092954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.093177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.093207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.093452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.093483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.093765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.093795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.094032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.094062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.094290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.094321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.094590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.094621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.094904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.094934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 
00:36:11.313 [2024-07-15 12:26:01.095183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.095212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.095538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.095568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.095885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.095916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.096139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.096169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.096480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.096517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.096824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.096855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.097135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.097165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.097407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.097439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.097654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.097684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.097957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.097987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 
00:36:11.313 [2024-07-15 12:26:01.098138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.098169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.098401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.098432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.098675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.098705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.099024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.099058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.099277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.099310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.099604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.099635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.099808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.099838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.100044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.100074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.100308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.100339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.100605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.100634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 
00:36:11.313 [2024-07-15 12:26:01.100832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.100862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.101024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.101054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.101185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.101215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.101460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.101491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.101662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.101693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.101819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.101851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.102140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.102170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.102313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.102345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.102685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.102715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.102862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.102895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 
00:36:11.313 [2024-07-15 12:26:01.103095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.103124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.103429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.103462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.103758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.103789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.104030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.104059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.104303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.104334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.104505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.104535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.104761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.104791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.105098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.105128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.105425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.105457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.105730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.105760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 
00:36:11.313 [2024-07-15 12:26:01.106083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.106116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.106337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.106370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.106594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.106624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.106955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.106986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.107275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.107313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.107518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.107550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.313 qpair failed and we were unable to recover it. 00:36:11.313 [2024-07-15 12:26:01.107702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.313 [2024-07-15 12:26:01.107733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.107961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.107994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.108260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.108292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.108489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.108519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 
00:36:11.314 [2024-07-15 12:26:01.108820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.108851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.109073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.109103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.109258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.109289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.109429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.109459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.109630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.109659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.109856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.109885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.110097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.110129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.110449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.110480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.110692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.110722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.110910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.110940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 
00:36:11.314 [2024-07-15 12:26:01.111078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.111107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.111150] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:36:11.314 [2024-07-15 12:26:01.111198] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.314 [2024-07-15 12:26:01.111349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.111381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.111606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.111634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.111921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.111952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.112080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.112110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.112275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.112305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.112540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.112570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.112718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.112749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.112894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.112924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 
00:36:11.314 [2024-07-15 12:26:01.113222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.113260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.113420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.113451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.113673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.113703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.113918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.113948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.114164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.114194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.114324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.114355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.114526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.114558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.114830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.114860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.115075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.115106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.115268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.115301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 
00:36:11.314 [2024-07-15 12:26:01.115439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.115469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.115621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.115651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.115784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.115814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.116025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.116055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.116252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.116327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.116595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.116667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.116961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.116995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.117217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.117261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.117486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.117517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.117782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.117812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 
00:36:11.314 [2024-07-15 12:26:01.118008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.314 [2024-07-15 12:26:01.118037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.314 qpair failed and we were unable to recover it. 00:36:11.314 [2024-07-15 12:26:01.118275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.118306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.118451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.118480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.118767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.118797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.119064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.119094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.119300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.119330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.119540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.119569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.119780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.119819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.120030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.120059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.120187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.120216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 
00:36:11.315 [2024-07-15 12:26:01.120422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.120452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.120587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.120616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.120770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.120799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.121005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.121034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.121181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.121211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.121492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.121523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.121734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.121763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.121893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.121923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.122125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.122155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.122281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.122312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 
00:36:11.315 [2024-07-15 12:26:01.122510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.122540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.122688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.122719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.122854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.122884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.123091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.123122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.123276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.123307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.123519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.123549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.123768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.123799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.123938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.123972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.124183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.124213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.124354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.124384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 
00:36:11.315 [2024-07-15 12:26:01.124583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.124613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.124742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.124772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.124904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.124934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.125219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.125258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.125488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.125521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.125681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.125710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.125864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.125893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.126033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.126062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.126255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.126286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.126551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.126581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 
00:36:11.315 [2024-07-15 12:26:01.126852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.126882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.127040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.127070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.127199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.127238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.127352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.127382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.127646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.127675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.127940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.127970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.128133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.128163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.128414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.128450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.128592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.128621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.128772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.128802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 
00:36:11.315 [2024-07-15 12:26:01.128982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.129012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.129240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.129271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.129406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.129435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.129562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.129593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.129882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.129911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.130042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.130071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.130209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.130245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.130387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.315 [2024-07-15 12:26:01.130417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.315 qpair failed and we were unable to recover it. 00:36:11.315 [2024-07-15 12:26:01.130613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.130644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.130850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.130879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.131089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.131119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.131246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.131278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.131507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.131537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.131686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.131717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.131851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.131880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.132076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.132106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.132309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.132341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.132471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.132500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.132630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.132660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.132951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.132980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.133133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.133163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.133356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.133387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.133581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.133611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.133809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.133838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.134049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.134084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.134302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.134334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.134460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.134490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.134689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.134720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.134932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.134961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.135117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.135147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.135276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.135306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.135516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.135546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.135741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.135770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.136046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.136078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.136340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.136371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.136484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.136513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.136726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.136756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.136959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.136994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.137141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.137171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.137386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.137417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.137632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.137662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.137936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.137966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.138252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.138283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.138497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.138527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.138670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.138700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.138923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.138952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.139170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.139200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.139405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.139436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.139626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.139655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.139854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.139884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.140079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.140108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.140317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.140348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 EAL: No free 2048 kB hugepages reported on node 1 00:36:11.316 [2024-07-15 12:26:01.140578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.140608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.140754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.140783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.140927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.140957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.141106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.141135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.141330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.141361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.141486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.141517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.141676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.141706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.141920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.141950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 
00:36:11.316 [2024-07-15 12:26:01.142154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.142183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.142375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.142407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.142616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.142646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.316 [2024-07-15 12:26:01.142810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.316 [2024-07-15 12:26:01.142842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.316 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.143111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.143141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.143286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.143318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.143448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.143478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.143620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.143650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.143855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.143885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.144080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.144110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 
00:36:11.317 [2024-07-15 12:26:01.144316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.144346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.144584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.144615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.144805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.144835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.145063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.145093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.145354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.145385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.145630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.145660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.145866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.145896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.146158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.146193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.146432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.146462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.146678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.146708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 
00:36:11.317 [2024-07-15 12:26:01.146910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.146940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.147148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.147178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.147400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.147431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.147647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.147677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.147943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.147973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.148209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.148252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.148464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.148494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.148748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.148778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.149039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.149069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.149351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.149382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 
00:36:11.317 [2024-07-15 12:26:01.149584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.149613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.149809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.149839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.150031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.150061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.150201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.150238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.150383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.150412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.150637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.150666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.150903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.150933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.151138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.151167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.151296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.151327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.151512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.151542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 
00:36:11.317 [2024-07-15 12:26:01.151800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.151829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.152118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.152148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.152301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.152331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.152495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.152526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.152844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.152879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.153094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.153123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.153276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.153307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.153519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.153548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.153840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.153870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.154072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.154102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 
00:36:11.317 [2024-07-15 12:26:01.154243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.154273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.154538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.154568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.154774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.154804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.154960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.154990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.155211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.317 [2024-07-15 12:26:01.155247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.317 qpair failed and we were unable to recover it. 00:36:11.317 [2024-07-15 12:26:01.155461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.155491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.155682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.155712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.155862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.155898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.156104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.156133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.156341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.156373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 
00:36:11.318 [2024-07-15 12:26:01.156607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.156636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.156773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.156803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.156937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.156966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.157120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.157150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.157346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.157376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.157507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.157537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.157748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.157779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.158062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.158092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.158282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.158313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.158465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.158496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 
00:36:11.318 [2024-07-15 12:26:01.158698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.158728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.158949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.158979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.159192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.159222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.159378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.159408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.159608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.159638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.159871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.159901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.160209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.160247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.160389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.160419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.160578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.160608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.160872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.160902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 
00:36:11.318 [2024-07-15 12:26:01.161185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.161215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.161510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.161540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.161688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.161719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.161940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.161971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.162190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.162231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.162444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.162475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.162615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.162645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.162931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.162961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.163167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.163197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.163478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.163508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 
00:36:11.318 [2024-07-15 12:26:01.163791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.163821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.164022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.164054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.164265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.164295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.164503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.164533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.164738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.164768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.164982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.165011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.165215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.165256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.165403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.165439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.165663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.165693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.165861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.165890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 
00:36:11.318 [2024-07-15 12:26:01.166147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.166176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.166316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.166347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.166604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.166634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.166927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.166957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.167252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.167283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.167475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.167504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.167708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.167738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.318 qpair failed and we were unable to recover it. 00:36:11.318 [2024-07-15 12:26:01.167954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.318 [2024-07-15 12:26:01.167984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.168133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.168182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.168382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.168413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.168632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.168661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.168906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.168937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.169194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.169223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.169436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.169465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.169759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.169789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.170020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.170049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.170272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.170303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.170507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.170536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.170725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.170755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.170916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.170945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.171095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.171125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.171330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.171360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.171515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.171544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.171741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.171770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.171978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.172012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.172337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.172367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.172598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.172627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.172756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.172786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.173058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.173087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.173284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.173315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.173588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.173616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.173808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.173838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.174052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.174082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.174300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.174331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.174607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.174638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.174850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.174879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.175084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.175114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.175331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.175366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.175596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.175626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.175891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.175921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.176057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.176086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.176349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.176379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.176511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.176541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.176751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.176780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.176996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.177026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.177215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.177253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.177389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.177419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.177613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.177643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.177844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.177874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.178077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.178107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.178390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.178421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.178582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.178612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.178742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.178771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.178979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.179008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.179200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.179235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.179515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.179545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.179760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.179790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.179992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.180021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.180318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.180349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.180507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.180536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 
00:36:11.319 [2024-07-15 12:26:01.180688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.180717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.180936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.180966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.319 [2024-07-15 12:26:01.181158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.319 [2024-07-15 12:26:01.181188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.319 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.181392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.181423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.181634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.181668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.181894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.181924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.182125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.182155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.182361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.182392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.182599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.182629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.182905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.182933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 
00:36:11.320 [2024-07-15 12:26:01.183127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.183157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.183294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.183325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.183602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.183631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.183824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.183854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.184057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.184086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.184219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.184256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.184397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.184426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.184626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.184660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.184891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.184920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.185111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.185140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 
00:36:11.320 [2024-07-15 12:26:01.185297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.185327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.185406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:11.320 [2024-07-15 12:26:01.185581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.185610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.185812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.185841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.186057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.186089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.186195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.186233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.186512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.186542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.186757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.186786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.186982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.187011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.187218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.187259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 
00:36:11.320 [2024-07-15 12:26:01.187455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.187485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.187696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.187726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.187924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.187954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.188159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.188189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.188351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.188382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.188600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.188630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.188825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.188854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.189058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.189087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.189315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.189347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.189564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.189594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 
00:36:11.320 [2024-07-15 12:26:01.189802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.189831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.189970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.189999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.190294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.190326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.190556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.190586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.190797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.190827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.191033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.191063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.191199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.191235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.191364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.191395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.191615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.191645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.191834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.191863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 
00:36:11.320 [2024-07-15 12:26:01.192127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.192157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.320 [2024-07-15 12:26:01.192348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.320 [2024-07-15 12:26:01.192380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.320 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.192523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.192552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.192835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.192866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.193068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.193099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.193327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.193359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.193566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.193596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.193802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.193831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.194029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.194065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.194221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.194260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 
00:36:11.321 [2024-07-15 12:26:01.194473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.194503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.194719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.194750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.194958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.194988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.195127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.195157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.195355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.195387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.195641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.195670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.195809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.195840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.196029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.196060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.196187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.196216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.196451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.196481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 
00:36:11.321 [2024-07-15 12:26:01.196668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.196698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.196907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.196937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.197139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.197168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.197317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.197348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.197540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.197569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.197822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.197851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.198060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.198089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.198296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.198326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.198607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.198637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.198868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.198897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 
00:36:11.321 [2024-07-15 12:26:01.199165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.199195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.199399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.199429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.199641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.199670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.199888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.199918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.200113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.200143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.200450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.200480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.200733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.200763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.200961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.200990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.201274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.201304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.201449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.201478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 
00:36:11.321 [2024-07-15 12:26:01.201678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.201707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.201832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.201862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.202043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.202072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.202296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.202326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.202593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.202623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.202814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.202843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.203022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.203051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.203304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.203334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.203484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.203519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.203796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.203826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 
00:36:11.321 [2024-07-15 12:26:01.203964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.203994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.204259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.204291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.204484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.321 [2024-07-15 12:26:01.204513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.321 qpair failed and we were unable to recover it. 00:36:11.321 [2024-07-15 12:26:01.204731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.204760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.205020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.205050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.205265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.205297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.205448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.205479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.205736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.205769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.205986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.206019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.206161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.206191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 
00:36:11.322 [2024-07-15 12:26:01.206411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.206443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.206703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.206736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.206962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.206998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.207191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.207222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.207380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.207410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.207612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.207642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.207784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.207815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.208075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.208107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.208311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.208361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.208549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.208579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 
00:36:11.322 [2024-07-15 12:26:01.208743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.208773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.208971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.209001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.209193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.209222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.209417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.209447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.209563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.209593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.209910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.209939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.210161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.210190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.210451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.210482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.210676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.210705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.210913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.210942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 
00:36:11.322 [2024-07-15 12:26:01.211076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.211106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.211302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.211332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.211565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.211596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.211738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.211767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.211903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.211933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.212060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.212089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.212249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.212281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.212553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.212583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.212834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.212870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.213148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.213178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 
00:36:11.322 [2024-07-15 12:26:01.213344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.213374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.213629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.213659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.213919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.213949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.214140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.214170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.214392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.214423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.214642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.214672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.214884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.214914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.215103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.215133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.215401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.215431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.215556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.215586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 
00:36:11.322 [2024-07-15 12:26:01.215810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.215840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.216044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.216073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.216287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.216318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.216531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.216560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.216769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.216798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.216932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.216961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.217163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.322 [2024-07-15 12:26:01.217192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.322 qpair failed and we were unable to recover it. 00:36:11.322 [2024-07-15 12:26:01.217484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.217514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.217764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.217794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.218091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.218121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 
00:36:11.323 [2024-07-15 12:26:01.218374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.218404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.218612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.218641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.218782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.218812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.219088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.219117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.219320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.219351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.219568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.219597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.219852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.219881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.220169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.220199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.220445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.220475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.220616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.220645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 
00:36:11.323 [2024-07-15 12:26:01.220845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.220875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.221153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.221181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.221325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.221356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.221554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.221584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.221729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.221758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.221967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.221996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.222279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.222309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.222515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.222545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.222750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.222784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.223015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.223044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 
00:36:11.323 [2024-07-15 12:26:01.223278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.223309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.223494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.223524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.223661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.223691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.223866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.223897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.224039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.224068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.224189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.224217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.224346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.224376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.224554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.224583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.224774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.224803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.225010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.225040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 
00:36:11.323 [2024-07-15 12:26:01.225243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.225273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.225393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.225422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.225583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.225613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.225820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.225850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.226066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.226097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.226289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.226321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.226511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.226540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.226666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.226695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.226952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.226982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.227245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.227276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 [2024-07-15 12:26:01.227279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.227313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:11.323 [2024-07-15 12:26:01.227321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:11.323 [2024-07-15 12:26:01.227327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:11.323 [2024-07-15 12:26:01.227333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:11.323 [2024-07-15 12:26:01.227487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.227517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 [2024-07-15 12:26:01.227444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.227552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:36:11.323 [2024-07-15 12:26:01.227661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:36:11.323 [2024-07-15 12:26:01.227787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.227661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:36:11.323 [2024-07-15 12:26:01.227827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.228063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.228091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.228223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.228275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.228414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.228443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.228697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.228726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
00:36:11.323 [2024-07-15 12:26:01.228979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.323 [2024-07-15 12:26:01.229009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420
00:36:11.323 qpair failed and we were unable to recover it.
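The app_setup_trace notices above describe how the tracepoint data for this run can be captured. A minimal sketch of that capture, using only the command and file name quoted in the notices (whether spdk_trace is on PATH in this environment, and the destination path, are assumptions, not something the log confirms):

    # Snapshot the trace of the running nvmf target (command quoted from the notice above)
    spdk_trace -s nvmf -i 0
    # Or preserve the raw shared-memory trace file for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # destination path is illustrative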
00:36:11.323 [2024-07-15 12:26:01.229194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.323 [2024-07-15 12:26:01.229235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.323 qpair failed and we were unable to recover it. 00:36:11.323 [2024-07-15 12:26:01.229428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.229458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.229709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.229738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.229942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.229971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.230115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.230143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.230276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.230307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.230576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.230606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.230735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.230764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.231045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.231103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 
00:36:11.324 [2024-07-15 12:26:01.231168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f1b60 (9): Bad file descriptor 00:36:11.324 [2024-07-15 12:26:01.231469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.231533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.231695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.231729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.231942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.231973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.232181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.232212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.232422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.232453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.232642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.232672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.232798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.232828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.233013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.233043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.233246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.233276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 
00:36:11.324 [2024-07-15 12:26:01.233482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.233512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.233779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.233809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.234028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.234058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.234254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.234285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.234537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.234567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.234869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.234899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.235096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.235126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.235275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.235305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.235444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.235473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.235749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.235779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 
00:36:11.324 [2024-07-15 12:26:01.235974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.236004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.236192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.236222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.236433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.236462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.236614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.236643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.236865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.236895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.237031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.237061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.237252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.237288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.237418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.237448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.237646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.237675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.237861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.237890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 
00:36:11.324 [2024-07-15 12:26:01.238113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.238143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.238334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.238365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.238621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.238650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.238839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.238869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.239021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.239050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.239264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.324 [2024-07-15 12:26:01.239295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.324 qpair failed and we were unable to recover it. 00:36:11.324 [2024-07-15 12:26:01.239495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.239525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.239781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.239811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.239937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.239967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.240141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.240171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.240328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.240359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.240549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.240579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.240834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.240864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.241077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.241108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.241261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.241291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.241499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.241530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.241656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.241686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.241884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.241913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.242126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.242156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.242356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.242386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.242639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.242668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.242810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.242840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.242979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.243009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.243291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.243323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.243580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.243611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.243804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.243835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.243991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.244022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.244244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.244276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.244403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.244434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.244570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.244601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.244827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.244858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.245007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.245037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.245162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.245193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.245369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.245411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.245680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.245711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.245938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.245969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.246167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.246204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.246455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.246488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.246691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.246721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.246859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.246888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.247087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.247118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.247382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.247415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.247604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.247635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.247767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.247798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.247945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.247976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.248184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.248217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.248511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.248543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.248679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.248710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.248849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.248879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.249090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.249120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.249337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.249369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.249560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.249592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.249784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.249816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.250098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.250130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.250320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.250350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.250580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.250611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.250869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.250902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.251045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.251075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.251260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.251293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 00:36:11.325 [2024-07-15 12:26:01.251547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.251578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.325 qpair failed and we were unable to recover it. 
00:36:11.325 [2024-07-15 12:26:01.251710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.325 [2024-07-15 12:26:01.251740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.251966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.251996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.252180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.252212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.252516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.252549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.252846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.252878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.253155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.253187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.253476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.253509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.253646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.253677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.253881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.253911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.254100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.254130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 
00:36:11.326 [2024-07-15 12:26:01.254320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.254350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.254606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.254637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.254765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.254795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.254999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.255031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.255170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.255200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.255367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.255399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.255651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.255689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.255943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.255973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.256113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.256143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.256328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.256359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 
00:36:11.326 [2024-07-15 12:26:01.256542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.256573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.256829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.256860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.257113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.257143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.257333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.257363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.257574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.257604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.257857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.257887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.258110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.258139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.258417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.258448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.258631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.258660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.258789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.258819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 
00:36:11.326 [2024-07-15 12:26:01.258961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.258991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.259180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.259209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.259495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.259525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.259717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.259746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.259876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.259904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.260107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.260137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.260343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.260374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.260578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.260609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.260796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.260827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.261080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.261110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 
00:36:11.326 [2024-07-15 12:26:01.261337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.261367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.261594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.261624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.261812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.261842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.262074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.262137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.262284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.262326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.262535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.262565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.262771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.262802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.263009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.263039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.263244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.263275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.263529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.263559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 
00:36:11.326 [2024-07-15 12:26:01.263760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.263789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.264002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.264031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.264218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.326 [2024-07-15 12:26:01.264258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.326 qpair failed and we were unable to recover it. 00:36:11.326 [2024-07-15 12:26:01.264535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.264565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.264773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.264803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.265060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.265090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.265387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.265417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.265636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.265667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.265972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.266002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.266217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.266258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 
00:36:11.327 [2024-07-15 12:26:01.266445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.266475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.266624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.266653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.266803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.266832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.267026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.267056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.267310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.267340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.267490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.267520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.267740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.267770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.267966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.267996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.268188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.268217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.268480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.268510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 
00:36:11.327 [2024-07-15 12:26:01.268724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.268759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.268898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.268927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.269135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.269166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.269322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.269353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.269548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.269580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.269787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.269822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.270081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.270114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.270313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.270345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.270553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.270584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.270836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.270866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 
00:36:11.327 [2024-07-15 12:26:01.271135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.271164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.271293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.271325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.271578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.271608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.271746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.271776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.271929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.271959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.272213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.272253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.272515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.272545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.272835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.272866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.273052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.273081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.273239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.273269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 
00:36:11.327 [2024-07-15 12:26:01.273420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.273450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.273704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.273737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.273875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.273906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.274164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.274195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.274451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.274500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.274761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.274791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.274995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.327 [2024-07-15 12:26:01.275026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.327 qpair failed and we were unable to recover it. 00:36:11.327 [2024-07-15 12:26:01.275235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.275268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.275493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.275523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.275789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.275818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 
00:36:11.328 [2024-07-15 12:26:01.276092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.276122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.276297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.276329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.276606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.276636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.276786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.276816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.277001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.277031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.277255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.277286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.277462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.277493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.277685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.277715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.277985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.278014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.278150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.278182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 
00:36:11.328 [2024-07-15 12:26:01.278492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.278530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.278733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.278764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.278903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.278933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.279187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.279217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.279505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.279538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.279738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.279771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.280038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.280072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.280290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.280325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.280450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.280482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.280703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.280738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 
00:36:11.328 [2024-07-15 12:26:01.281018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.281052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.281184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.281215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.281420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.281454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.328 [2024-07-15 12:26:01.281685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.328 [2024-07-15 12:26:01.281716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.328 qpair failed and we were unable to recover it. 00:36:11.597 [2024-07-15 12:26:01.281940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.597 [2024-07-15 12:26:01.281991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.597 qpair failed and we were unable to recover it. 00:36:11.597 [2024-07-15 12:26:01.282135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.597 [2024-07-15 12:26:01.282164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.597 qpair failed and we were unable to recover it. 00:36:11.597 [2024-07-15 12:26:01.282309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.597 [2024-07-15 12:26:01.282340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.597 qpair failed and we were unable to recover it. 00:36:11.597 [2024-07-15 12:26:01.282537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.597 [2024-07-15 12:26:01.282566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.597 qpair failed and we were unable to recover it. 00:36:11.597 [2024-07-15 12:26:01.282729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.597 [2024-07-15 12:26:01.282758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.597 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.282887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.282917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 
00:36:11.598 [2024-07-15 12:26:01.283121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.283150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.283380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.283411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.283590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.283620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.283779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.283808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.283954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.283983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.284179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.284208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.284371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.284401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.284592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.284628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.284828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.284858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.285151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.285180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 
00:36:11.598 [2024-07-15 12:26:01.285342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.285372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.285600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.285630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.285769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.285798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.285990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.286019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.286168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.286197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.286461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.286495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.286628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.286657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.286911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.286941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.287159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.287188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.287452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.287482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 
00:36:11.598 [2024-07-15 12:26:01.287735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.287765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.287905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.287935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.288156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.288186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.288450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.288480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.288660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.288691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.288969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.288998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.289182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.289212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.289438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.289468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.289587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.289616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.289868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.289897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 
00:36:11.598 [2024-07-15 12:26:01.290100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.598 [2024-07-15 12:26:01.290129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.598 qpair failed and we were unable to recover it. 00:36:11.598 [2024-07-15 12:26:01.290381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.290412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.290538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.290568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.290841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.290870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.291122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.291156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.291412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.291442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.291664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.291693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.291910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.291939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.292141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.292170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.292471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.292502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 
00:36:11.599 [2024-07-15 12:26:01.292719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.292748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.292888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.292918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.293195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.293230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.293453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.293483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.293768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.293797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.293945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.293974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.294239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.294270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.294567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.294596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.294732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.294762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.294982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 
00:36:11.599 [2024-07-15 12:26:01.295135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.295294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.295529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.295692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.295851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.295880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.296068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.296097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.296291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.296320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.296466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.296496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.296715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.296744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.297020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.297049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 
00:36:11.599 [2024-07-15 12:26:01.297253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.297283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.297401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.599 [2024-07-15 12:26:01.297436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.599 qpair failed and we were unable to recover it. 00:36:11.599 [2024-07-15 12:26:01.297617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.297647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.297859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.297888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.298073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.298103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.298307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.298338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.298474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.298504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.298791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.298821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.298953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.298983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.299185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.299214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 
00:36:11.600 [2024-07-15 12:26:01.299504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.299534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.299656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.299685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.299875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.299903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.300078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.300107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.300307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.300337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.300575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.300617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.300878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.300908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.301056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.301085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.301287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.301318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.301521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.301550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 
00:36:11.600 [2024-07-15 12:26:01.301803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.301833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.302043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.302073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.302280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.302311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.302518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.302547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.302820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.302850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.303040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.303069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.303270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.303301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.303506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.303536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.303734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.303770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.304005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.304035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 
00:36:11.600 [2024-07-15 12:26:01.304312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.304342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.304597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.304627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.304825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.304855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.600 qpair failed and we were unable to recover it. 00:36:11.600 [2024-07-15 12:26:01.305059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.600 [2024-07-15 12:26:01.305088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.305346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.305376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.305572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.305601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.305805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.305834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.306044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.306073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.306345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.306375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.306509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.306538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 
00:36:11.601 [2024-07-15 12:26:01.306735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.306764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.306967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.306996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.307197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.307253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.307461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.307490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.307689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.307718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.307993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.308023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.308206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.308247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.308459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.308488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.308743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.308772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.308975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.309004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 
00:36:11.601 [2024-07-15 12:26:01.309207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.309245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.309462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.309491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.309687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.309716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.309919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.309948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.310192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.310221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.310545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.310614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.310871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.310919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.311156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.311186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.311347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.311379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.311634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.311664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 
00:36:11.601 [2024-07-15 12:26:01.311818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.311848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.312105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.312134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.312320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.312351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.312608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.312638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.312780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.601 [2024-07-15 12:26:01.312809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.601 qpair failed and we were unable to recover it. 00:36:11.601 [2024-07-15 12:26:01.313027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.313056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.313269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.313299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.313505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.313535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.313735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.313771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.314023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.314052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 
00:36:11.602 [2024-07-15 12:26:01.314257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.314288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.314516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.314546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.314682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.314712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.314848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.314878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.315133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.315163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.315370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.315401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.315611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.315641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.315752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.315782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.315971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.316001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.316206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.316246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 
00:36:11.602 [2024-07-15 12:26:01.316475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.316506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.316763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.316792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.316996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.317027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.317246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.317276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.317470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.317499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.317755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.317785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.317959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.317988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.318245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.318276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.318536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.318566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.318788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.318817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 
00:36:11.602 [2024-07-15 12:26:01.318955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.318985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.319123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.319153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.319288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.319318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.319604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.319633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.319836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.319866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.320082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.602 [2024-07-15 12:26:01.320122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.602 qpair failed and we were unable to recover it. 00:36:11.602 [2024-07-15 12:26:01.320255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.320288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.320429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.320459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.320672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.320701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.320953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.320983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 
00:36:11.603 [2024-07-15 12:26:01.321111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.321141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.321341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.321372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.321627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.321658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.321844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.321874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.322059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.322088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.322237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.322268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.322408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.322438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.322703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.322732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.323006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.323044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.323247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.323278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 
00:36:11.603 [2024-07-15 12:26:01.323558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.323587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.323864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.323894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.324098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.324127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.324312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.324343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.324609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.324639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.324922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.324952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.325236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.325266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.325572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.325602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.325789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.325820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.326019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.326048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 
00:36:11.603 [2024-07-15 12:26:01.326304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.603 [2024-07-15 12:26:01.326335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.603 qpair failed and we were unable to recover it. 00:36:11.603 [2024-07-15 12:26:01.326525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.326555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.326852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.326882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.327037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.327067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.327346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.327376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.327523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.327553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.604 [2024-07-15 12:26:01.327742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.327772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:36:11.604 [2024-07-15 12:26:01.327988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.328018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.328235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.328267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 
00:36:11.604 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:11.604 [2024-07-15 12:26:01.328483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.328513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:11.604 [2024-07-15 12:26:01.328688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.328717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.604 [2024-07-15 12:26:01.328918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.328948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.329136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.329166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.329371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.329408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.329602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.329632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.329833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.329863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.330061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.330091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.330293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.330324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 
00:36:11.604 [2024-07-15 12:26:01.330624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.330654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.330790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.330819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.331008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.331039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.331245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.331275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.331476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.331505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.331730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.331762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.331963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.331993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.332198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.332234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.332509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.332539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 00:36:11.604 [2024-07-15 12:26:01.332752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.332782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.604 qpair failed and we were unable to recover it. 
00:36:11.604 [2024-07-15 12:26:01.333025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.604 [2024-07-15 12:26:01.333056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.333352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.333382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.333568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.333598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.333804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.333834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.334013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.334042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.334304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.334335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.334533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.334563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.334766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.334796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.335006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.335036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.335167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.335196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 
00:36:11.605 [2024-07-15 12:26:01.335326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.335358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.335593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.335623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.335766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.335796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.335988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.336019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.336221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.336259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.336542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.336572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.336708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.336738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.336892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.336922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.337076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.337105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.337317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.337347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 
00:36:11.605 [2024-07-15 12:26:01.337485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.337514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.337662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.337691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.337837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.337867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.338090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.338120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.338325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.338358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.338512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.338547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.338697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.338727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.338925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.338955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.339102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.339131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.605 [2024-07-15 12:26:01.339288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.339320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 
00:36:11.605 [2024-07-15 12:26:01.339451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.605 [2024-07-15 12:26:01.339481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.605 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.339683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.339715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.339925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.339955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.340159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.340191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.340371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.340403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.340596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.340626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.340899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.340929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.341067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.341096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.341352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.341383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.341587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.341617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 
00:36:11.606 [2024-07-15 12:26:01.341828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.341857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.341992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.342156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.342335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.342571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.342745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.342902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.342931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.343244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.343274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.343415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.343445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.343704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.343734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 
00:36:11.606 [2024-07-15 12:26:01.343859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.343888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.344091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.344122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.344253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.344285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.344538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.344568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.344725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.344754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.344942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.344971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.345157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.345187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.345341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.345371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.345629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.345659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.345856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.345886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 
00:36:11.606 [2024-07-15 12:26:01.346109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.346140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.606 qpair failed and we were unable to recover it. 00:36:11.606 [2024-07-15 12:26:01.346341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.606 [2024-07-15 12:26:01.346373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.346509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.346538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.346676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.346705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.346906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.346935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.347212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.347255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.347514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.347543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.347678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.347709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.347844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.347873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.348026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.348055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 
00:36:11.607 [2024-07-15 12:26:01.348200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.348238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.348434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.348464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.348620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.348649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.348851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.348880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.349002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.349032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.349165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.349194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.349410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.349440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.349638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.349668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.349880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.349910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.350133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.350164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 
00:36:11.607 [2024-07-15 12:26:01.350309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.350347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.350480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.350511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.350731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.350761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.350952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.350982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.351186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.351217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.351445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.351475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.351617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.351646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.351843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.351873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.352009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.352038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 00:36:11.607 [2024-07-15 12:26:01.352177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.352208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.607 qpair failed and we were unable to recover it. 
00:36:11.607 [2024-07-15 12:26:01.352365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.607 [2024-07-15 12:26:01.352395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.352655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.352684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.352821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.352851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.353930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.353959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.354101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.354131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 
00:36:11.608 [2024-07-15 12:26:01.354327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.354359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.354485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.354514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.354630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.354661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.354942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.354971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.355101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.355131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.355275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.355312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.355444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.355474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.355628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.355658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.355914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.355943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.356176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.356206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 
00:36:11.608 [2024-07-15 12:26:01.356475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.356505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.356654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.356684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.356816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.356846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.357042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.357072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.357352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.357382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.357520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.357550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.357690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.357721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.357842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.357872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.358097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.358126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.358341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.358375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 
00:36:11.608 [2024-07-15 12:26:01.358584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.358615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.358762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.358791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.608 qpair failed and we were unable to recover it. 00:36:11.608 [2024-07-15 12:26:01.358920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.608 [2024-07-15 12:26:01.358949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.359075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.359105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.359240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.359270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.359465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.359495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.359723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.359752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.359876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.359906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.360157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.360322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 
00:36:11.609 [2024-07-15 12:26:01.360483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.360638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.360794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.360959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.360988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.361176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.361206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.361417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.361447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.361584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.361613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.361740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.361769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.362006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.362164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 
00:36:11.609 [2024-07-15 12:26:01.362351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.362507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.362763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:11.609 [2024-07-15 12:26:01.362923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.362957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 [2024-07-15 12:26:01.363087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.363116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.609 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:11.609 [2024-07-15 12:26:01.363382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.609 [2024-07-15 12:26:01.363415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.609 qpair failed and we were unable to recover it. 00:36:11.610 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.610 [2024-07-15 12:26:01.363615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.363650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.363840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.363870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 
00:36:11.610 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.610 [2024-07-15 12:26:01.364001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.364031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.364159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.364189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.364327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.364357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.364545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.364575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.364761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.364791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.364994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.365023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.365252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.365283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.365415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.365445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.365641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.365671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 
00:36:11.610 [2024-07-15 12:26:01.365824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.365856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.366045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.366073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.366274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.366305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.366439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.366469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.366725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.366755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.366877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.366906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.367041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.367069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.367271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.367302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.367428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.367457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.367737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.367766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 
00:36:11.610 [2024-07-15 12:26:01.367906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.367936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.368060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.368089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.368298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.368328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.368573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.368619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.368826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.368857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.369039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.369068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.369205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.369247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.369436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.369465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.369673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.369703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 00:36:11.610 [2024-07-15 12:26:01.369823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.610 [2024-07-15 12:26:01.369853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.610 qpair failed and we were unable to recover it. 
00:36:11.610 [2024-07-15 12:26:01.370068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.370098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.370244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.370275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.370465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.370494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.370683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.370713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.370850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.370879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.371126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.371155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.371369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.371400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.371585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.371615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.371750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.371779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.371922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.371952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 
00:36:11.611 [2024-07-15 12:26:01.372214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.372259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.372402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.372432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.372628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.372657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.372787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.372816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.373011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.373041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.373239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.373269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.373473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.373503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.373689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.373718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.373975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.374194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 
00:36:11.611 [2024-07-15 12:26:01.374374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.374557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.374794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.374965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.374994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.375190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.375220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.375443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.375473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.375685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.375715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.375902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.375932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.376168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.376198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.611 [2024-07-15 12:26:01.376351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.376418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 
00:36:11.611 [2024-07-15 12:26:01.376657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.611 [2024-07-15 12:26:01.376704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.611 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.376858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.376888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.377095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.377125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.377330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.377363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.377516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.377546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.377681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.377710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.377907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.377937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.378057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.378087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.378371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.378405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.378629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.378661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 
00:36:11.612 [2024-07-15 12:26:01.378796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.378827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.378966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.378996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.379186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.379216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.379419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.379449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.379638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.379670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.379855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.379886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.380145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.380174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.380386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.380424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.380638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.380668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.380886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.380915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 
00:36:11.612 [2024-07-15 12:26:01.381125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.381154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.381348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.381380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.381570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.381600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.381912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.381942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.382135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.382164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.382314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.382343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 Malloc0 00:36:11.612 [2024-07-15 12:26:01.382547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.382577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.382710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.382739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.382865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.382894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 00:36:11.612 [2024-07-15 12:26:01.383016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.612 [2024-07-15 12:26:01.383045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.612 qpair failed and we were unable to recover it. 
00:36:11.613 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.613 [2024-07-15 12:26:01.383239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.383275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.383404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.383433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:11.613 [2024-07-15 12:26:01.383619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.383649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.383791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.383821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.613 [2024-07-15 12:26:01.383940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.383970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.384093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.384123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.384261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.384291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.384491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.384520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 
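Note: the shell trace interleaved above ("rpc_cmd nvmf_create_transport -t tcp -o") is the harness side of the picture: while the initiator keeps failing to connect, the test script is creating the TCP transport on the target over SPDK's JSON-RPC interface; the "TCP Transport Init" notice further down shows it took effect. Outside the rpc_cmd wrapper, the equivalent call is roughly the sketch below (assuming the SPDK repo's scripts/rpc.py and the default RPC socket; the extra "-o" option is simply forwarded from the harness's transport options and is left as-is here):

  # sketch: create the NVMe-oF TCP transport on a running nvmf_tgt (default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o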
00:36:11.613 [2024-07-15 12:26:01.384656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.384685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.384829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.384860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.384992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.385167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.385331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.385490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.385728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.385943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.385973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.386189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.386219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.386362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.386392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 
00:36:11.613 [2024-07-15 12:26:01.386532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.386563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.386689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.386718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.386844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.386873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.386992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.387152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.387315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.387474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.387691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.387932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.387966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.388102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.388131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 
00:36:11.613 [2024-07-15 12:26:01.388415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.388445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.388735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.388764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.388967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.388996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.389115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.389145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.389350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.613 [2024-07-15 12:26:01.389380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.613 qpair failed and we were unable to recover it. 00:36:11.613 [2024-07-15 12:26:01.389598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.389627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.389773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.389802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.390037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.614 [2024-07-15 12:26:01.390109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.390137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.390266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.390296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 
00:36:11.614 [2024-07-15 12:26:01.390558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.390587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.390865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.390895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa23c000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.391067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.391128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.391269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.391303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.391562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.391593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.391718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.391747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.391888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.391918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.392055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.392085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.392373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.392404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.392605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 
00:36:11.614 [2024-07-15 12:26:01.392766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.392795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.392933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.392963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.393258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.393289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.393494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.393523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.393660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.393690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.393964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.394002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.394256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.394287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.394494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.394524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.394676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.394706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.614 qpair failed and we were unable to recover it. 00:36:11.614 [2024-07-15 12:26:01.394986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.614 [2024-07-15 12:26:01.395025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 
00:36:11.615 [2024-07-15 12:26:01.395247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.395278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.395426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.395456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.615 [2024-07-15 12:26:01.395645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.395675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.395797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.395826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:11.615 [2024-07-15 12:26:01.396078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.396108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.615 [2024-07-15 12:26:01.396364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.396395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.396546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.396577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.396793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.396836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 
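Note: the next RPC in the trace, "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001", defines the subsystem the initiator will later connect to. A standalone sketch of the same step, with the flags exactly as shown in the trace (scripts/rpc.py path assumed):

  # sketch: create the subsystem, allow any host (-a), set the reported serial number (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001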
00:36:11.615 [2024-07-15 12:26:01.397037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.397068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.397263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.397294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.397581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.397611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.397785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.397815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.398034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.398063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.398204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.398246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.398453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.398483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.398700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.398730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.398916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.398945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.399131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.399188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa244000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 
00:36:11.615 [2024-07-15 12:26:01.399468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.399502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.399703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.399733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.399953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.399988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.400274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.400305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.400499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.400528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.400729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.400758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.400982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.401012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.401165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.401194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e3b60 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.401404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.401437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.401717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.401746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 
00:36:11.615 [2024-07-15 12:26:01.401892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.402172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.402202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.402386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.402416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.402620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.402649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.402840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.402869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.403066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.615 [2024-07-15 12:26:01.403096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.615 qpair failed and we were unable to recover it. 00:36:11.615 [2024-07-15 12:26:01.403334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.403366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.403515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.403544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.403691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.403720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.403973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.404003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 
00:36:11.616 [2024-07-15 12:26:01.404184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.404214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.404435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.404464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.404651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.404680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.404883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.404913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.405112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.405142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.405347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.405377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.405528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.405558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.405667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.405696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.405826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.405855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.406029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.406064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 
00:36:11.616 [2024-07-15 12:26:01.406199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.406239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.406386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.406416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.406608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.406637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.406744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.406774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.406976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.407006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.407143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.407172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.407397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.407428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.407577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.407607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.407813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.407842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 
00:36:11.616 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:11.616 [2024-07-15 12:26:01.408093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.408123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.616 [2024-07-15 12:26:01.408400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.408431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.616 [2024-07-15 12:26:01.408641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.408672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.408877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.408907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.409033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.409062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.409351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.616 [2024-07-15 12:26:01.409381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.616 qpair failed and we were unable to recover it. 00:36:11.616 [2024-07-15 12:26:01.409593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.409623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.409821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.409851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 
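Note: the "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0" call above attaches a namespace to that subsystem; Malloc0 is the RAM-backed bdev whose name is echoed earlier in this output. Sketch of the same step (scripts/rpc.py path assumed):

  # sketch: back the subsystem with the Malloc0 bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0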
00:36:11.617 [2024-07-15 12:26:01.409972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.410264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.410444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.410598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.410763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.410928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.410957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.411237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.411267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.411479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.411514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.411718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.411748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.411956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.411986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 
00:36:11.617 [2024-07-15 12:26:01.412211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.412250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.412532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.412562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.412766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.412795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.413002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.413031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.413247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.413277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.413534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.413564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.413837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.413867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.414065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.414094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.414352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.414383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.414573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.414602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 
00:36:11.617 [2024-07-15 12:26:01.414721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.414750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.414958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.414988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.415124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.415153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.415406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.415437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.415576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.415606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.617 [2024-07-15 12:26:01.415876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.415905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.617 [2024-07-15 12:26:01.416031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.416061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 00:36:11.617 [2024-07-15 12:26:01.416262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.617 [2024-07-15 12:26:01.416293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.617 qpair failed and we were unable to recover it. 
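Note: the last setup RPC in the trace, "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420", exposes the subsystem on the exact address and port the initiator has been retrying; the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice just below confirms the listener came up. Sketch of the same step (scripts/rpc.py path assumed):

  # sketch: listen for NVMe/TCP connections on 10.0.0.2:4420 (-t trtype, -a traddr, -s trsvcid)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420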
00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.618 [2024-07-15 12:26:01.416451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.416481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.618 [2024-07-15 12:26:01.416750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.416780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.416907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.416937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.417130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.417160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.417311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.417343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.417544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.417574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.417825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.417855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.417977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.418006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.418210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.418248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 
00:36:11.618 [2024-07-15 12:26:01.418369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.418399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.418668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.418698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.418818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.418847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.419034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.618 [2024-07-15 12:26:01.419063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.419113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.618 [2024-07-15 12:26:01.420606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.618 [2024-07-15 12:26:01.420728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.618 [2024-07-15 12:26:01.420774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.618 [2024-07-15 12:26:01.420798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.618 [2024-07-15 12:26:01.420819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.618 [2024-07-15 12:26:01.420870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.618 qpair failed and we were unable to recover it. 
00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.618 [2024-07-15 12:26:01.430577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.618 [2024-07-15 12:26:01.430703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.618 [2024-07-15 12:26:01.430747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.618 [2024-07-15 12:26:01.430768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.618 [2024-07-15 12:26:01.430788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.618 [2024-07-15 12:26:01.430833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:11.618 12:26:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1367161 00:36:11.618 [2024-07-15 12:26:01.440585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.618 [2024-07-15 12:26:01.440674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.618 [2024-07-15 12:26:01.440703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.618 [2024-07-15 12:26:01.440718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.618 [2024-07-15 12:26:01.440737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.618 [2024-07-15 12:26:01.440767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.618 qpair failed and we were unable to recover it. 
00:36:11.618 [2024-07-15 12:26:01.450543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.618 [2024-07-15 12:26:01.450623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.618 [2024-07-15 12:26:01.450644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.618 [2024-07-15 12:26:01.450654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.618 [2024-07-15 12:26:01.450662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.618 [2024-07-15 12:26:01.450682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.460543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.618 [2024-07-15 12:26:01.460604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.618 [2024-07-15 12:26:01.460620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.618 [2024-07-15 12:26:01.460626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.618 [2024-07-15 12:26:01.460632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.618 [2024-07-15 12:26:01.460647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.618 qpair failed and we were unable to recover it. 00:36:11.618 [2024-07-15 12:26:01.470577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.470636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.470652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.470658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.470664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.470678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 
00:36:11.619 [2024-07-15 12:26:01.480613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.480671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.480687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.480693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.480699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.480714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.490574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.490633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.490648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.490654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.490660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.490674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.500633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.500690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.500704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.500711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.500717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.500731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 
00:36:11.619 [2024-07-15 12:26:01.510668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.510755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.510769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.510778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.510784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.510798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.520705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.520779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.520794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.520801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.520807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.520821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.530744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.530801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.530816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.530822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.530828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.530842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 
00:36:11.619 [2024-07-15 12:26:01.540780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.540838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.540853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.540860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.540865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.540880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.550799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.550856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.550871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.550878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.550883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.550898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.560815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.560874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.560888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.560895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.560901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.560914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 
00:36:11.619 [2024-07-15 12:26:01.570841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.570899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.570913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.570919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.619 [2024-07-15 12:26:01.570925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.619 [2024-07-15 12:26:01.570938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.619 qpair failed and we were unable to recover it. 00:36:11.619 [2024-07-15 12:26:01.580855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.619 [2024-07-15 12:26:01.580919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.619 [2024-07-15 12:26:01.580934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.619 [2024-07-15 12:26:01.580940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.620 [2024-07-15 12:26:01.580946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.620 [2024-07-15 12:26:01.580960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.620 qpair failed and we were unable to recover it. 00:36:11.880 [2024-07-15 12:26:01.590899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.880 [2024-07-15 12:26:01.590961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.880 [2024-07-15 12:26:01.590976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.880 [2024-07-15 12:26:01.590983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.880 [2024-07-15 12:26:01.590989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.591003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 
00:36:11.881 [2024-07-15 12:26:01.600953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.601028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.601045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.601051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.601057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.601071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.611011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.611070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.611085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.611091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.611097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.611111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.621014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.621075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.621089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.621096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.621101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.621115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 
00:36:11.881 [2024-07-15 12:26:01.631039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.631097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.631111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.631118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.631123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.631137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.641084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.641141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.641155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.641161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.641167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.641183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.651086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.651146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.651160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.651167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.651172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.651186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 
00:36:11.881 [2024-07-15 12:26:01.661117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.661178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.661192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.661198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.661204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.661217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.671104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.671166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.671180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.671187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.671192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.671206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.681226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.681308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.681322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.681328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.681334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.681348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 
00:36:11.881 [2024-07-15 12:26:01.691232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.691289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.691307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.691313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.691319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.691334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.701242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.701300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.701315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.701321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.701327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.701341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.711272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.711328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.711343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.711350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.711356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.711371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 
00:36:11.881 [2024-07-15 12:26:01.721301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.721377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.721392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.721398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.721404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.881 [2024-07-15 12:26:01.721418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.881 qpair failed and we were unable to recover it. 00:36:11.881 [2024-07-15 12:26:01.731324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.881 [2024-07-15 12:26:01.731386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.881 [2024-07-15 12:26:01.731400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.881 [2024-07-15 12:26:01.731407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.881 [2024-07-15 12:26:01.731415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.731429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.741338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.741400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.741414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.741421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.741426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.741440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-07-15 12:26:01.751378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.751448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.751462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.751469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.751474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.751488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.761415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.761473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.761488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.761494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.761500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.761514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.771413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.771489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.771503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.771509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.771514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.771528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-07-15 12:26:01.781448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.781510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.781525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.781531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.781537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.781551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.791411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.791471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.791485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.791492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.791498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.791511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.801513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.801570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.801584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.801590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.801596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.801610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-07-15 12:26:01.811558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.811661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.811676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.811682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.811688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.811702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.821611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.821681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.821696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.821705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.821710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.821725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.831594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.831652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.831667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.831673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.831679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.831693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-07-15 12:26:01.841624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.841678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.841692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.841699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.841705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.841718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.851654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.851708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.851722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.851729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.851734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.851748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:11.882 [2024-07-15 12:26:01.861673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.861733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.861747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.861753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.861759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.861773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 
00:36:11.882 [2024-07-15 12:26:01.871746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.882 [2024-07-15 12:26:01.871832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.882 [2024-07-15 12:26:01.871846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.882 [2024-07-15 12:26:01.871852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.882 [2024-07-15 12:26:01.871858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:11.882 [2024-07-15 12:26:01.871872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.882 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.881793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.881852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.881867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.881873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.881879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.881893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.891782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.891842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.891857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.891863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.891869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.891882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:01.901791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.901855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.901869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.901875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.901880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.901894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.911836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.911895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.911909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.911918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.911924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.911938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.921859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.921914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.921928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.921934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.921940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.921954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:01.931924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.932007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.932021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.932027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.932033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.932046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.941838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.941903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.941917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.941923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.941929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.941943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.951933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.951989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.952003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.952010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.952015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.952029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:01.961973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.962035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.962048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.962054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.962060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.962074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.972011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.972072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.972086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.972093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.972098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.972112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:01.981950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.982016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.982030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.982036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.982042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.982056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:01.992013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:01.992072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:01.992086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:01.992093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:01.992098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:01.992112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:02.002092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.002152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.002169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.002175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.002181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.002196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:02.012127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.012222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.012241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.012248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.012253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.012266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:02.022074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.022156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.022171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.022177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.022183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.022197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:02.032151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.032207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.032221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.032233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.032239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.032253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:02.042165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.042228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.042243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.042249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.042255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.042274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 
00:36:12.142 [2024-07-15 12:26:02.052182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.142 [2024-07-15 12:26:02.052246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.142 [2024-07-15 12:26:02.052261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.142 [2024-07-15 12:26:02.052267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.142 [2024-07-15 12:26:02.052273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.142 [2024-07-15 12:26:02.052287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.142 qpair failed and we were unable to recover it. 00:36:12.142 [2024-07-15 12:26:02.062254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.062320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.062334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.062340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.062346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.062360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-07-15 12:26:02.072257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.072320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.072336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.072342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.072348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.072363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-07-15 12:26:02.082234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.082292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.082306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.082313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.082319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.082333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-07-15 12:26:02.092290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.092352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.092369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.092375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.092381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.092395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-07-15 12:26:02.102389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.102475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.102489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.102495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.102501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.102514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.143 [2024-07-15 12:26:02.112369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.112429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.112443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.112450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.112455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.112469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-07-15 12:26:02.122441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.122499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.122514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.122520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.122526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.122541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 00:36:12.143 [2024-07-15 12:26:02.132448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.143 [2024-07-15 12:26:02.132522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.143 [2024-07-15 12:26:02.132536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.143 [2024-07-15 12:26:02.132542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.143 [2024-07-15 12:26:02.132552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.143 [2024-07-15 12:26:02.132566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.143 qpair failed and we were unable to recover it. 
00:36:12.402 [2024-07-15 12:26:02.142418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.142481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.142496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.142502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.142508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.142522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.152446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.152506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.152521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.152527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.152533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.152547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.162547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.162607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.162621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.162627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.162633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.162647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 
00:36:12.402 [2024-07-15 12:26:02.172561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.172619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.172633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.172639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.172644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.172658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.182590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.182655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.182669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.182675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.182681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.182695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.192581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.192655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.192670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.192676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.192681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.192695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 
00:36:12.402 [2024-07-15 12:26:02.202638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.202697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.202711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.202717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.202723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.202737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.212706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.212766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.212781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.212787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.212793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.212806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 00:36:12.402 [2024-07-15 12:26:02.222663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.402 [2024-07-15 12:26:02.222718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.402 [2024-07-15 12:26:02.222732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.402 [2024-07-15 12:26:02.222739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.402 [2024-07-15 12:26:02.222747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.402 [2024-07-15 12:26:02.222762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.402 qpair failed and we were unable to recover it. 
00:36:12.403 [2024-07-15 12:26:02.232910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.232977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.232991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.232997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.233003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.233018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.242797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.242859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.242872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.242879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.242884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.242898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.252844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.252904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.252918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.252924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.252930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.252943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 
00:36:12.403 [2024-07-15 12:26:02.262841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.262906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.262920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.262926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.262932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.262946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.272853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.272911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.272925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.272931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.272937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.272951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.282853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.282938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.282952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.282959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.282964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.282978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 
00:36:12.403 [2024-07-15 12:26:02.292846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.292902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.292916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.292923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.292928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.292942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.302921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.302981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.302995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.303001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.303007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.303021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.312882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.312948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.312962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.312972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.312978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.312992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 
00:36:12.403 [2024-07-15 12:26:02.322989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.323049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.323063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.323069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.323075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.323088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.333002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.333059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.333074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.333081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.333086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.333100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.343055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.343112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.343126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.343134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.343140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.343154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 
00:36:12.403 [2024-07-15 12:26:02.353080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.353135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.353151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.353157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.353163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.353177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.363091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.363153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.403 [2024-07-15 12:26:02.363168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.403 [2024-07-15 12:26:02.363174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.403 [2024-07-15 12:26:02.363180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.403 [2024-07-15 12:26:02.363194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.403 qpair failed and we were unable to recover it. 00:36:12.403 [2024-07-15 12:26:02.373155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.403 [2024-07-15 12:26:02.373229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.404 [2024-07-15 12:26:02.373244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.404 [2024-07-15 12:26:02.373250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.404 [2024-07-15 12:26:02.373256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.404 [2024-07-15 12:26:02.373270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.404 qpair failed and we were unable to recover it. 
00:36:12.404 [2024-07-15 12:26:02.383103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.404 [2024-07-15 12:26:02.383162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.404 [2024-07-15 12:26:02.383177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.404 [2024-07-15 12:26:02.383183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.404 [2024-07-15 12:26:02.383189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.404 [2024-07-15 12:26:02.383202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.404 qpair failed and we were unable to recover it. 00:36:12.404 [2024-07-15 12:26:02.393198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.404 [2024-07-15 12:26:02.393282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.404 [2024-07-15 12:26:02.393297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.404 [2024-07-15 12:26:02.393303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.404 [2024-07-15 12:26:02.393309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.404 [2024-07-15 12:26:02.393324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.404 qpair failed and we were unable to recover it. 00:36:12.662 [2024-07-15 12:26:02.403229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.662 [2024-07-15 12:26:02.403288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.662 [2024-07-15 12:26:02.403306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.662 [2024-07-15 12:26:02.403313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.662 [2024-07-15 12:26:02.403318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.662 [2024-07-15 12:26:02.403332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.662 qpair failed and we were unable to recover it. 
00:36:12.662 [2024-07-15 12:26:02.413286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.662 [2024-07-15 12:26:02.413353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.662 [2024-07-15 12:26:02.413368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.662 [2024-07-15 12:26:02.413375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.662 [2024-07-15 12:26:02.413381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.662 [2024-07-15 12:26:02.413395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.662 qpair failed and we were unable to recover it. 00:36:12.662 [2024-07-15 12:26:02.423280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.662 [2024-07-15 12:26:02.423339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.662 [2024-07-15 12:26:02.423354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.662 [2024-07-15 12:26:02.423360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.423366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.423381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.433374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.433433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.433447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.433453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.433459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.433473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.443346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.443400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.443414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.443421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.443427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.443443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.453379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.453457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.453471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.453477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.453483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.453497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.463429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.463489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.463503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.463510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.463515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.463529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.473436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.473492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.473506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.473513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.473518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.473532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.483472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.483529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.483544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.483550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.483556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.483569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.493488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.493549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.493566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.493572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.493578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.493591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.503546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.503605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.503620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.503626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.503632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.503645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.513599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.513661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.513675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.513681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.513687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.513701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.523523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.523618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.523632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.523638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.523645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.523659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.533621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.533689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.533704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.533710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.533719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.533733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.543636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.543699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.543713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.543720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.543725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.543739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.553675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.553744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.553759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.553765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.553771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.553784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.563734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.563792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.563806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.563812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.563818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.563832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.573739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.573805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.573818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.573824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.573830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.573844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.583738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.583803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.583817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.583823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.583829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.583842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.593777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.593836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.593850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.593857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.593862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.593876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.603805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.603860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.603874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.603881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.603886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.603900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 00:36:12.663 [2024-07-15 12:26:02.613844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.663 [2024-07-15 12:26:02.613900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.663 [2024-07-15 12:26:02.613915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.663 [2024-07-15 12:26:02.613921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.663 [2024-07-15 12:26:02.613927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.663 [2024-07-15 12:26:02.613941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.663 qpair failed and we were unable to recover it. 
00:36:12.663 [2024-07-15 12:26:02.623868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.664 [2024-07-15 12:26:02.623924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.664 [2024-07-15 12:26:02.623938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.664 [2024-07-15 12:26:02.623944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.664 [2024-07-15 12:26:02.623955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.664 [2024-07-15 12:26:02.623969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.664 qpair failed and we were unable to recover it. 00:36:12.664 [2024-07-15 12:26:02.633892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.664 [2024-07-15 12:26:02.633952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.664 [2024-07-15 12:26:02.633966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.664 [2024-07-15 12:26:02.633972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.664 [2024-07-15 12:26:02.633978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.664 [2024-07-15 12:26:02.633991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.664 qpair failed and we were unable to recover it. 00:36:12.664 [2024-07-15 12:26:02.643931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.664 [2024-07-15 12:26:02.643988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.664 [2024-07-15 12:26:02.644002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.664 [2024-07-15 12:26:02.644009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.664 [2024-07-15 12:26:02.644015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.664 [2024-07-15 12:26:02.644029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.664 qpair failed and we were unable to recover it. 
00:36:12.664 [2024-07-15 12:26:02.653960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.664 [2024-07-15 12:26:02.654024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.664 [2024-07-15 12:26:02.654039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.664 [2024-07-15 12:26:02.654045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.664 [2024-07-15 12:26:02.654051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.664 [2024-07-15 12:26:02.654065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.664 qpair failed and we were unable to recover it. 00:36:12.922 [2024-07-15 12:26:02.663959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.922 [2024-07-15 12:26:02.664023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.922 [2024-07-15 12:26:02.664038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.922 [2024-07-15 12:26:02.664044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.922 [2024-07-15 12:26:02.664050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.922 [2024-07-15 12:26:02.664063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.922 qpair failed and we were unable to recover it. 00:36:12.922 [2024-07-15 12:26:02.674029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.922 [2024-07-15 12:26:02.674080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.922 [2024-07-15 12:26:02.674095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.922 [2024-07-15 12:26:02.674101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.922 [2024-07-15 12:26:02.674107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.922 [2024-07-15 12:26:02.674120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.922 qpair failed and we were unable to recover it. 
00:36:12.922 [2024-07-15 12:26:02.684020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.922 [2024-07-15 12:26:02.684073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.922 [2024-07-15 12:26:02.684087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.684094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.684100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.684114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.694078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.694137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.694151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.694157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.694163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.694177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.704100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.704162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.704177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.704183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.704189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.704203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.714193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.714249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.714264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.714273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.714279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.714293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.724137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.724192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.724206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.724213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.724219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.724236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.734185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.734252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.734266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.734273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.734278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.734292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.744220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.744284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.744299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.744305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.744311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.744325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.754283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.754365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.754379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.754385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.754391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.754405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.764270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.764324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.764339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.764346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.764351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.764365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.774333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.774407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.774422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.774428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.774434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.774448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.784328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.784388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.784402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.784408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.784413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.784427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.794360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.794414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.794428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.794435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.794440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.794454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.804441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.804497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.804514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.804520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.804525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.804539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.814472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.814578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.814593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.814599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.814605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.814619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.824453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.824510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.824524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.824530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.824536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.824550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.834482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.834542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.834556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.834562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.834568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.834582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.844500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.844572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.844586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.844592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.844598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.844614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.854544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.854602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.854616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.854622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.854628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.854642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.864546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.864607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.864621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.864627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.864633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.864647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.874598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.874654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.874668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.874675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.874680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.874694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.884630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.884690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.884704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.884710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.884716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.884729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:12.923 [2024-07-15 12:26:02.894671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.894752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.894769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.894775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.894781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.894794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.904679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.904735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.904749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.904755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.904761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.904775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 00:36:12.923 [2024-07-15 12:26:02.914701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.923 [2024-07-15 12:26:02.914766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.923 [2024-07-15 12:26:02.914780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.923 [2024-07-15 12:26:02.914786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.923 [2024-07-15 12:26:02.914792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:12.923 [2024-07-15 12:26:02.914805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.923 qpair failed and we were unable to recover it. 
00:36:13.182 [2024-07-15 12:26:02.924773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.182 [2024-07-15 12:26:02.924831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.182 [2024-07-15 12:26:02.924846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.182 [2024-07-15 12:26:02.924852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.182 [2024-07-15 12:26:02.924858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.182 [2024-07-15 12:26:02.924872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.182 qpair failed and we were unable to recover it. 00:36:13.182 [2024-07-15 12:26:02.934775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.182 [2024-07-15 12:26:02.934835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.182 [2024-07-15 12:26:02.934849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.182 [2024-07-15 12:26:02.934856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.182 [2024-07-15 12:26:02.934861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.182 [2024-07-15 12:26:02.934878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.182 qpair failed and we were unable to recover it. 00:36:13.182 [2024-07-15 12:26:02.944786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.182 [2024-07-15 12:26:02.944840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.182 [2024-07-15 12:26:02.944854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.182 [2024-07-15 12:26:02.944861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.182 [2024-07-15 12:26:02.944866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.944880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 
00:36:13.183 [2024-07-15 12:26:02.954817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:02.954872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:02.954886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:02.954893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:02.954898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.954913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:02.964826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:02.964886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:02.964901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:02.964907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:02.964913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.964927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:02.974890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:02.974947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:02.974962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:02.974968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:02.974974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.974988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 
00:36:13.183 [2024-07-15 12:26:02.984909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:02.984973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:02.984987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:02.984994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:02.985000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.985013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:02.994958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:02.995014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:02.995028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:02.995035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:02.995040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:02.995054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:03.004959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.005017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.005032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.005039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:03.005045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:03.005058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 
00:36:13.183 [2024-07-15 12:26:03.014997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.015052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.015066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.015073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:03.015079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:03.015093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:03.025000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.025059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.025075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.025081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:03.025090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:03.025104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:03.035040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.035100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.035115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.035121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:03.035126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:03.035140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 
00:36:13.183 [2024-07-15 12:26:03.045083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.045140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.045155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.045161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.183 [2024-07-15 12:26:03.045167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.183 [2024-07-15 12:26:03.045181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.183 qpair failed and we were unable to recover it. 00:36:13.183 [2024-07-15 12:26:03.055131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.183 [2024-07-15 12:26:03.055218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.183 [2024-07-15 12:26:03.055236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.183 [2024-07-15 12:26:03.055242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.055248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.055262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.065172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.065263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.065277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.065283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.065289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.065303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 
00:36:13.184 [2024-07-15 12:26:03.075165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.075222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.075241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.075247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.075253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.075267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.085182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.085239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.085253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.085259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.085265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.085279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.095220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.095280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.095294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.095300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.095306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.095320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 
00:36:13.184 [2024-07-15 12:26:03.105249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.105309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.105324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.105331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.105337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.105351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.115287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.115347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.115361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.115371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.115376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.115390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.125305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.125364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.125379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.125386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.125392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.125406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 
00:36:13.184 [2024-07-15 12:26:03.135379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.135435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.135450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.135456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.135462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.135477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.145336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.145395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.145410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.145416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.145422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.145436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 00:36:13.184 [2024-07-15 12:26:03.155398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.155459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.155474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.155480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.155486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.155500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.184 qpair failed and we were unable to recover it. 
00:36:13.184 [2024-07-15 12:26:03.165458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.184 [2024-07-15 12:26:03.165517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.184 [2024-07-15 12:26:03.165532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.184 [2024-07-15 12:26:03.165538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.184 [2024-07-15 12:26:03.165544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.184 [2024-07-15 12:26:03.165558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.185 [2024-07-15 12:26:03.175461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.185 [2024-07-15 12:26:03.175518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.185 [2024-07-15 12:26:03.175532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.185 [2024-07-15 12:26:03.175539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.185 [2024-07-15 12:26:03.175545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.185 [2024-07-15 12:26:03.175559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.185 qpair failed and we were unable to recover it. 00:36:13.442 [2024-07-15 12:26:03.185500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.442 [2024-07-15 12:26:03.185585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.442 [2024-07-15 12:26:03.185599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.442 [2024-07-15 12:26:03.185605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.442 [2024-07-15 12:26:03.185611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.442 [2024-07-15 12:26:03.185625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.442 qpair failed and we were unable to recover it. 
00:36:13.442 [2024-07-15 12:26:03.195517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.442 [2024-07-15 12:26:03.195575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.442 [2024-07-15 12:26:03.195589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.442 [2024-07-15 12:26:03.195595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.442 [2024-07-15 12:26:03.195601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.442 [2024-07-15 12:26:03.195614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.442 qpair failed and we were unable to recover it. 00:36:13.442 [2024-07-15 12:26:03.205562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.442 [2024-07-15 12:26:03.205623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.442 [2024-07-15 12:26:03.205637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.442 [2024-07-15 12:26:03.205646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.442 [2024-07-15 12:26:03.205652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.442 [2024-07-15 12:26:03.205666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.442 qpair failed and we were unable to recover it. 00:36:13.442 [2024-07-15 12:26:03.215581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.442 [2024-07-15 12:26:03.215639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.442 [2024-07-15 12:26:03.215653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.442 [2024-07-15 12:26:03.215659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.442 [2024-07-15 12:26:03.215665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.442 [2024-07-15 12:26:03.215679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.442 qpair failed and we were unable to recover it. 
00:36:13.442 [2024-07-15 12:26:03.225623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.442 [2024-07-15 12:26:03.225680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.225694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.225701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.225706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.225720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.235635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.235696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.235710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.235716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.235722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.235736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.245592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.245655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.245669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.245675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.245681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.245695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.255641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.255699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.255713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.255720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.255726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.255740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.265719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.265776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.265790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.265796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.265802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.265816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.275767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.275833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.275847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.275854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.275859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.275874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.285771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.285868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.285882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.285889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.285895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.285909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.295842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.295947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.295970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.295977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.295982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.295996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.305866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.305977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.305996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.306002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.306008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.306022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.315851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.315906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.315920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.315927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.315932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.315945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.325879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.325933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.325947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.325953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.325959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.325973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.335912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.335973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.335987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.335994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.335999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.336019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.345932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.345991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.346006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.346012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.346018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.346032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.355972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.356031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.356045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.356051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.356057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.356071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.366012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.366069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.366083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.366090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.366096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.366110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.376030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.376090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.376104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.376110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.376116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.376130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.386047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.386110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.386128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.386136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.386141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.386156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.396085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.396145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.396160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.396166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.396172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.396186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.406109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.406168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.406183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.406189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.406195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.406209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.416183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.416244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.416259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.416265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.416271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.416284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.443 [2024-07-15 12:26:03.426172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.426246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.426260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.426267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.426275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.426289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 
00:36:13.443 [2024-07-15 12:26:03.436188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.443 [2024-07-15 12:26:03.436288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.443 [2024-07-15 12:26:03.436302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.443 [2024-07-15 12:26:03.436309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.443 [2024-07-15 12:26:03.436315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.443 [2024-07-15 12:26:03.436331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.443 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.446219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.446285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.446300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.446306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.446311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.446325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.456245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.456307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.456322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.456328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.456334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.456348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 
00:36:13.702 [2024-07-15 12:26:03.466262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.466323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.466337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.466344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.466350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.466363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.476318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.476382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.476397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.476403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.476409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.476423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.486343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.486401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.486416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.486422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.486427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.486441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 
00:36:13.702 [2024-07-15 12:26:03.496403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.496471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.496485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.496492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.496497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.496512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.506375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.506438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.506452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.506458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.506464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.506478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.516387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.516441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.516455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.516467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.516472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.516486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 
00:36:13.702 [2024-07-15 12:26:03.526447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.526506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.526520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.526527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.526532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.526546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.536503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.536562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.536576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.536583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.536588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.536602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.702 [2024-07-15 12:26:03.546500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.546559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.546573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.546579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.546585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.546599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 
00:36:13.702 [2024-07-15 12:26:03.556542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.702 [2024-07-15 12:26:03.556598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.702 [2024-07-15 12:26:03.556612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.702 [2024-07-15 12:26:03.556619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.702 [2024-07-15 12:26:03.556624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.702 [2024-07-15 12:26:03.556638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.702 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.566569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.566636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.566650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.566656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.566662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.566675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.576594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.576652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.576667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.576673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.576679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.576693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 
00:36:13.703 [2024-07-15 12:26:03.586570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.586630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.586645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.586652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.586657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.586672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.596595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.596797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.596813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.596820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.596826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.596840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.606718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.606822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.606841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.606851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.606857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.606871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 
00:36:13.703 [2024-07-15 12:26:03.616671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.616758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.616773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.616780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.616786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.616800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.626761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.626822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.626836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.626843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.626849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.626863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.636803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.636864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.636880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.636887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.636893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.636907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 
00:36:13.703 [2024-07-15 12:26:03.646854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.646908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.646923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.646929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.646935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.646948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.656843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.656898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.656913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.656919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.656925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.656939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.666881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.666944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.666958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.666965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.666971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.666985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 
00:36:13.703 [2024-07-15 12:26:03.676880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.676938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.676952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.676958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.676964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.676977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.686924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.687012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.687026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.687032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.687038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.687052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 00:36:13.703 [2024-07-15 12:26:03.696968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.703 [2024-07-15 12:26:03.697032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.703 [2024-07-15 12:26:03.697049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.703 [2024-07-15 12:26:03.697055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.703 [2024-07-15 12:26:03.697061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.703 [2024-07-15 12:26:03.697075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.703 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-07-15 12:26:03.706910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.962 [2024-07-15 12:26:03.706968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.962 [2024-07-15 12:26:03.706982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.962 [2024-07-15 12:26:03.706989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.962 [2024-07-15 12:26:03.706995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.962 [2024-07-15 12:26:03.707009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-07-15 12:26:03.717018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.962 [2024-07-15 12:26:03.717074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.962 [2024-07-15 12:26:03.717088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.962 [2024-07-15 12:26:03.717095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.962 [2024-07-15 12:26:03.717100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.962 [2024-07-15 12:26:03.717114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-07-15 12:26:03.727025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.962 [2024-07-15 12:26:03.727082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.962 [2024-07-15 12:26:03.727096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.962 [2024-07-15 12:26:03.727103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.962 [2024-07-15 12:26:03.727108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.962 [2024-07-15 12:26:03.727122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.962 qpair failed and we were unable to recover it. 
00:36:13.962 [2024-07-15 12:26:03.737012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.962 [2024-07-15 12:26:03.737118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.962 [2024-07-15 12:26:03.737133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.962 [2024-07-15 12:26:03.737139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.962 [2024-07-15 12:26:03.737145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.962 [2024-07-15 12:26:03.737163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.962 qpair failed and we were unable to recover it. 00:36:13.962 [2024-07-15 12:26:03.747116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.962 [2024-07-15 12:26:03.747192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.962 [2024-07-15 12:26:03.747207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.962 [2024-07-15 12:26:03.747213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.962 [2024-07-15 12:26:03.747219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.747237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.757151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.757235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.757250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.757256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.757262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.757275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.767162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.767218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.767237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.767244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.767250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.767264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.777255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.777362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.777376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.777382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.777388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.777402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.787210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.787269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.787286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.787293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.787299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.787312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.797220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.797306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.797321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.797327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.797333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.797347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.807307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.807364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.807378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.807384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.807390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.807404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.817281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.817343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.817357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.817364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.817369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.817383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.827328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.827387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.827401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.827407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.827416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.827430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.837350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.837452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.837466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.837473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.837479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.837493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.847397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.847489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.847503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.847509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.847516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.847529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.857439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.857493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.857507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.857514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.857520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.857533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.867436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.867497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.867511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.867518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.867523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.867537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.877529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.877588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.877602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.877608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.877613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.877627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.887498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.887555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.887570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.887576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.887582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.887595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.897528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.897589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.897603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.897609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.897615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.897629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.907558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.907620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.907634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.907641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.907646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.907660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.917577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.917634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.917648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.917655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.917663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.917677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.927617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.927677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.927692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.927698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.927704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.927717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.937658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.937718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.937731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.937737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.937743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.937757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 
00:36:13.963 [2024-07-15 12:26:03.947666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.947727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.947741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.963 [2024-07-15 12:26:03.947748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.963 [2024-07-15 12:26:03.947754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.963 [2024-07-15 12:26:03.947767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.963 qpair failed and we were unable to recover it. 00:36:13.963 [2024-07-15 12:26:03.957691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.963 [2024-07-15 12:26:03.957751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.963 [2024-07-15 12:26:03.957765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.964 [2024-07-15 12:26:03.957771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.964 [2024-07-15 12:26:03.957777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:13.964 [2024-07-15 12:26:03.957791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.964 qpair failed and we were unable to recover it. 00:36:14.222 [2024-07-15 12:26:03.967727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.222 [2024-07-15 12:26:03.967786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.222 [2024-07-15 12:26:03.967800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.222 [2024-07-15 12:26:03.967806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.222 [2024-07-15 12:26:03.967812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:03.967826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:03.977823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:03.977883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:03.977897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:03.977904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:03.977909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:03.977923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:03.987783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:03.987843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:03.987858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:03.987864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:03.987870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:03.987884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:03.997806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:03.997863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:03.997877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:03.997883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:03.997889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:03.997902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.007842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.007901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.007915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.007924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.007930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.007944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.017847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.017909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.017923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.017929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.017935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.017949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.027900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.027957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.027971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.027978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.027983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.027997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.037933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.037990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.038003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.038009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.038015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.038028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.047958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.048016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.048031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.048037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.048043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.048056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.057996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.058055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.058070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.058076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.058082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.058095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.068009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.068072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.068086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.068092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.068098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.068112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.078109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.078195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.078210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.078217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.078223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.078242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.088079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.088141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.088155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.088161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.088167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.088181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.098115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.098176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.098195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.098201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.098206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.098221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.108124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.108182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.108196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.108203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.108209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.108222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.118163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.118220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.118237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.118244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.118250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.118264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.128197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.128258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.128273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.128280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.128286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.128300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.138219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.138282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.138296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.138302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.138308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.138324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.148249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.148310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.148324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.148330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.148336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.148350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.158278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.158371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.158385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.158391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.158397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.158412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.168299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.168357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.168371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.168378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.168383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.168397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.178335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.178390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.178404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.178411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.178416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.178430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 
00:36:14.223 [2024-07-15 12:26:04.188352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.223 [2024-07-15 12:26:04.188413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.223 [2024-07-15 12:26:04.188431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.223 [2024-07-15 12:26:04.188437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.223 [2024-07-15 12:26:04.188443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.223 [2024-07-15 12:26:04.188457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.223 qpair failed and we were unable to recover it. 00:36:14.223 [2024-07-15 12:26:04.198385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.224 [2024-07-15 12:26:04.198446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.224 [2024-07-15 12:26:04.198460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.224 [2024-07-15 12:26:04.198466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.224 [2024-07-15 12:26:04.198472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.224 [2024-07-15 12:26:04.198486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.224 qpair failed and we were unable to recover it. 00:36:14.224 [2024-07-15 12:26:04.208414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.224 [2024-07-15 12:26:04.208480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.224 [2024-07-15 12:26:04.208494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.224 [2024-07-15 12:26:04.208500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.224 [2024-07-15 12:26:04.208506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.224 [2024-07-15 12:26:04.208519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.224 qpair failed and we were unable to recover it. 
00:36:14.224 [2024-07-15 12:26:04.218435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.224 [2024-07-15 12:26:04.218493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.224 [2024-07-15 12:26:04.218508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.224 [2024-07-15 12:26:04.218514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.224 [2024-07-15 12:26:04.218519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.224 [2024-07-15 12:26:04.218533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.224 qpair failed and we were unable to recover it. 00:36:14.481 [2024-07-15 12:26:04.228479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.481 [2024-07-15 12:26:04.228561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.481 [2024-07-15 12:26:04.228575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.481 [2024-07-15 12:26:04.228582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.481 [2024-07-15 12:26:04.228590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.481 [2024-07-15 12:26:04.228604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.481 qpair failed and we were unable to recover it. 00:36:14.481 [2024-07-15 12:26:04.238507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.481 [2024-07-15 12:26:04.238566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.481 [2024-07-15 12:26:04.238580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.481 [2024-07-15 12:26:04.238586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.481 [2024-07-15 12:26:04.238592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.481 [2024-07-15 12:26:04.238606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.481 qpair failed and we were unable to recover it. 
00:36:14.481 [2024-07-15 12:26:04.248563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.481 [2024-07-15 12:26:04.248634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.481 [2024-07-15 12:26:04.248648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.248654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.248660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.248674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.258601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.258660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.258673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.258679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.258685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.258699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.268628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.268688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.268702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.268709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.268714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.268728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 
00:36:14.482 [2024-07-15 12:26:04.278686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.278750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.278764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.278770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.278776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.278790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.288662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.288721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.288735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.288741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.288747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.288761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.298699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.298804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.298818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.298824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.298830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.298844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 
00:36:14.482 [2024-07-15 12:26:04.308706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.308766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.308781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.308787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.308793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.308806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.318776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.318834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.318848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.318854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.318863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.318876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.328738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.328792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.328806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.328813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.328818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.328832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 
00:36:14.482 [2024-07-15 12:26:04.338863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.338959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.338973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.338979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.338985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.338999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.348819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.348876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.348890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.348896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.348902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.348915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.358846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.358899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.358912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.358919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.358924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.358938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 
00:36:14.482 [2024-07-15 12:26:04.368882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.368938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.368952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.368958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.368964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.368977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.378939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.379041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.379057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.379063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.379069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.379084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 00:36:14.482 [2024-07-15 12:26:04.388925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.388981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.388995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.482 [2024-07-15 12:26:04.389002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.482 [2024-07-15 12:26:04.389007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.482 [2024-07-15 12:26:04.389021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.482 qpair failed and we were unable to recover it. 
00:36:14.482 [2024-07-15 12:26:04.399032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.482 [2024-07-15 12:26:04.399108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.482 [2024-07-15 12:26:04.399123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.399129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.399135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.399151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.409029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.409080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.409094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.409103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.409109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.409123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.419025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.419088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.419102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.419108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.419114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.419128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 
00:36:14.483 [2024-07-15 12:26:04.429051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.429131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.429145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.429152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.429157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.429172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.439075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.439148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.439162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.439169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.439175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.439189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.449097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.449164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.449179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.449186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.449192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.449205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 
00:36:14.483 [2024-07-15 12:26:04.459139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.459198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.459212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.459219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.459227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.459242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.469156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.469216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.469233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.469240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.469245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.469259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 00:36:14.483 [2024-07-15 12:26:04.479191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.483 [2024-07-15 12:26:04.479250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.483 [2024-07-15 12:26:04.479265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.483 [2024-07-15 12:26:04.479271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.483 [2024-07-15 12:26:04.479277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.483 [2024-07-15 12:26:04.479291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.483 qpair failed and we were unable to recover it. 
00:36:14.741 [2024-07-15 12:26:04.489213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.741 [2024-07-15 12:26:04.489291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.741 [2024-07-15 12:26:04.489305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.741 [2024-07-15 12:26:04.489312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.741 [2024-07-15 12:26:04.489317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.741 [2024-07-15 12:26:04.489331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.741 qpair failed and we were unable to recover it. 00:36:14.741 [2024-07-15 12:26:04.499267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.741 [2024-07-15 12:26:04.499331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.741 [2024-07-15 12:26:04.499349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.741 [2024-07-15 12:26:04.499355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.741 [2024-07-15 12:26:04.499361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.741 [2024-07-15 12:26:04.499374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.741 qpair failed and we were unable to recover it. 00:36:14.741 [2024-07-15 12:26:04.509294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.741 [2024-07-15 12:26:04.509367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.741 [2024-07-15 12:26:04.509381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.741 [2024-07-15 12:26:04.509387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.741 [2024-07-15 12:26:04.509393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.509407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 
00:36:14.742 [2024-07-15 12:26:04.519341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.519404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.519418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.519424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.519430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.519444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.529341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.529395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.529410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.529416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.529422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.529436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.539341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.539426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.539440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.539446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.539451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.539470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 
00:36:14.742 [2024-07-15 12:26:04.549335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.549392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.549405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.549412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.549417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.549431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.559476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.559579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.559592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.559599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.559605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.559619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.569463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.569521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.569535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.569542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.569547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.569560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 
00:36:14.742 [2024-07-15 12:26:04.579492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.579551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.579565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.579571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.579577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.579591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.589449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.589508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.589526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.589532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.589537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.589551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.599592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.599645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.599661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.599666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.599672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.599686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 
00:36:14.742 [2024-07-15 12:26:04.609504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.609560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.609575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.609582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.609588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.609601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.619591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.619649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.619662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.619669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.619675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.619688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.629542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.629602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.629616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.629622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.629628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.629644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 
00:36:14.742 [2024-07-15 12:26:04.639649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.639706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.639720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.639726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.639732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.639746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.649706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.742 [2024-07-15 12:26:04.649761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.742 [2024-07-15 12:26:04.649775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.742 [2024-07-15 12:26:04.649782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.742 [2024-07-15 12:26:04.649788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.742 [2024-07-15 12:26:04.649801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.742 qpair failed and we were unable to recover it. 00:36:14.742 [2024-07-15 12:26:04.659730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.659840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.659859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.659865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.659872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.659886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 
00:36:14.743 [2024-07-15 12:26:04.669735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.669795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.669809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.669816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.669821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.669835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:14.743 [2024-07-15 12:26:04.679788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.679849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.679863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.679870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.679875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.679890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:14.743 [2024-07-15 12:26:04.689797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.689856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.689870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.689877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.689883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.689897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 
00:36:14.743 [2024-07-15 12:26:04.699777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.699865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.699880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.699886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.699892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.699906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:14.743 [2024-07-15 12:26:04.709849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.709906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.709921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.709928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.709934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.709948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:14.743 [2024-07-15 12:26:04.719898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.719954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.719968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.719974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.719983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.719997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 
00:36:14.743 [2024-07-15 12:26:04.729850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.729908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.729922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.729929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.729935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.729948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:14.743 [2024-07-15 12:26:04.739970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.743 [2024-07-15 12:26:04.740079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.743 [2024-07-15 12:26:04.740093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.743 [2024-07-15 12:26:04.740100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.743 [2024-07-15 12:26:04.740105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:14.743 [2024-07-15 12:26:04.740119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:14.743 qpair failed and we were unable to recover it. 00:36:15.001 [2024-07-15 12:26:04.749987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.001 [2024-07-15 12:26:04.750052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.001 [2024-07-15 12:26:04.750067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.001 [2024-07-15 12:26:04.750073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.001 [2024-07-15 12:26:04.750078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.001 [2024-07-15 12:26:04.750092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 
00:36:15.002 [2024-07-15 12:26:04.759988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.760047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.760061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.760067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.760073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.760087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.770031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.770084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.770100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.770106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.770112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.770126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.780071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.780129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.780143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.780150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.780156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.780170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 
00:36:15.002 [2024-07-15 12:26:04.790071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.790130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.790144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.790151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.790157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.790170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.800119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.800174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.800188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.800195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.800201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.800215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.810137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.810195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.810210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.810222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.810231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.810246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 
00:36:15.002 [2024-07-15 12:26:04.820193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.820270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.820284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.820291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.820297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.820311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.830193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.830255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.830269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.830275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.830281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.830295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.840257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.840316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.840331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.840340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.840348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.840364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 
00:36:15.002 [2024-07-15 12:26:04.850255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.850359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.850374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.850380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.850387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.850401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.860249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.860306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.860320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.860327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.860332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.860346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.870327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.870386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.870400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.870407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.870413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.870426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 
00:36:15.002 [2024-07-15 12:26:04.880337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.880410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.880424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.880430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.880436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.880450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.002 [2024-07-15 12:26:04.890403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.002 [2024-07-15 12:26:04.890466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.002 [2024-07-15 12:26:04.890480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.002 [2024-07-15 12:26:04.890487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.002 [2024-07-15 12:26:04.890493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.002 [2024-07-15 12:26:04.890507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.002 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.900368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.900440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.900457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.900463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.900469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.900483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 
00:36:15.003 [2024-07-15 12:26:04.910458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.910517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.910531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.910538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.910543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.910557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.920406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.920465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.920480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.920485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.920491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.920504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.930546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.930645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.930659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.930666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.930672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.930685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 
00:36:15.003 [2024-07-15 12:26:04.940535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.940594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.940609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.940615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.940621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.940634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.950491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.950551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.950566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.950572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.950578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.950591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.960564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.960671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.960692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.960699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.960705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.960719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 
00:36:15.003 [2024-07-15 12:26:04.970578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.970636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.970651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.970657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.970663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.970677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.980624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.980681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.980695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.980701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.980708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.980722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 00:36:15.003 [2024-07-15 12:26:04.990633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.003 [2024-07-15 12:26:04.990694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.003 [2024-07-15 12:26:04.990712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.003 [2024-07-15 12:26:04.990718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.003 [2024-07-15 12:26:04.990724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.003 [2024-07-15 12:26:04.990737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.003 qpair failed and we were unable to recover it. 
00:36:15.262 [2024-07-15 12:26:05.000701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.000763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.000777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.000783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.000789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.000803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 00:36:15.262 [2024-07-15 12:26:05.010656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.010716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.010730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.010737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.010742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.010756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 00:36:15.262 [2024-07-15 12:26:05.020740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.020799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.020814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.020820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.020826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.020839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 
00:36:15.262 [2024-07-15 12:26:05.030769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.030827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.030841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.030848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.030853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.030870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 00:36:15.262 [2024-07-15 12:26:05.040798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.040856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.040870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.040877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.040882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.040897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 00:36:15.262 [2024-07-15 12:26:05.050784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.050841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.050855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.050862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.050868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.050882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 
00:36:15.262 [2024-07-15 12:26:05.060896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.060984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.262 [2024-07-15 12:26:05.060998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.262 [2024-07-15 12:26:05.061004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.262 [2024-07-15 12:26:05.061010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.262 [2024-07-15 12:26:05.061025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.262 qpair failed and we were unable to recover it. 00:36:15.262 [2024-07-15 12:26:05.070871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.262 [2024-07-15 12:26:05.070951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.070965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.070972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.070977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.070991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.080875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.080939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.080957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.080964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.080970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.080984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 
00:36:15.263 [2024-07-15 12:26:05.090985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.091047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.091061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.091068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.091073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.091087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.100992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.101052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.101066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.101073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.101079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.101092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.111005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.111065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.111080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.111087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.111093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.111106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 
00:36:15.263 [2024-07-15 12:26:05.121061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.121144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.121158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.121165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.121173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.121187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.131064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.131159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.131175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.131181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.131187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.131201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.141094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.141150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.141164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.141171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.141176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.141190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 
00:36:15.263 [2024-07-15 12:26:05.151143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.151241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.151256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.151262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.151268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.151282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.161141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.161198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.161213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.263 [2024-07-15 12:26:05.161219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.263 [2024-07-15 12:26:05.161229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.263 [2024-07-15 12:26:05.161242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.263 qpair failed and we were unable to recover it. 00:36:15.263 [2024-07-15 12:26:05.171168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.263 [2024-07-15 12:26:05.171233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.263 [2024-07-15 12:26:05.171248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.171256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.171262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.171277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 
00:36:15.264 [2024-07-15 12:26:05.181138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.181196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.181210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.181216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.181222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.181240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.264 [2024-07-15 12:26:05.191211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.191275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.191290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.191296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.191302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.191316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.264 [2024-07-15 12:26:05.201268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.201328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.201342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.201348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.201354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.201368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 
00:36:15.264 [2024-07-15 12:26:05.211302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.211363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.211377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.211387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.211392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.211406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.264 [2024-07-15 12:26:05.221318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.221380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.221394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.221401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.221406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.221420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.264 [2024-07-15 12:26:05.231364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.231427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.231441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.231448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.231454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.231467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 
00:36:15.264 [2024-07-15 12:26:05.241394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.241459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.241474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.241480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.241486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.241500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.264 [2024-07-15 12:26:05.251410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.264 [2024-07-15 12:26:05.251466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.264 [2024-07-15 12:26:05.251481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.264 [2024-07-15 12:26:05.251487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.264 [2024-07-15 12:26:05.251494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.264 [2024-07-15 12:26:05.251507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.264 qpair failed and we were unable to recover it. 00:36:15.523 [2024-07-15 12:26:05.261471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.523 [2024-07-15 12:26:05.261546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.523 [2024-07-15 12:26:05.261560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.523 [2024-07-15 12:26:05.261567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.523 [2024-07-15 12:26:05.261573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.523 [2024-07-15 12:26:05.261586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.523 qpair failed and we were unable to recover it. 
00:36:15.523 [2024-07-15 12:26:05.271450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.523 [2024-07-15 12:26:05.271510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.523 [2024-07-15 12:26:05.271525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.523 [2024-07-15 12:26:05.271531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.523 [2024-07-15 12:26:05.271537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.523 [2024-07-15 12:26:05.271551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.523 qpair failed and we were unable to recover it. 00:36:15.523 [2024-07-15 12:26:05.281429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.523 [2024-07-15 12:26:05.281490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.523 [2024-07-15 12:26:05.281505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.523 [2024-07-15 12:26:05.281511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.523 [2024-07-15 12:26:05.281517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.523 [2024-07-15 12:26:05.281531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.523 qpair failed and we were unable to recover it. 00:36:15.523 [2024-07-15 12:26:05.291454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.523 [2024-07-15 12:26:05.291515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.523 [2024-07-15 12:26:05.291529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.523 [2024-07-15 12:26:05.291535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.523 [2024-07-15 12:26:05.291541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.291554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 
00:36:15.524 [2024-07-15 12:26:05.301496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.301557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.301572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.301582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.301588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.301601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.311562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.311623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.311638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.311644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.311650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.311664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.321574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.321646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.321661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.321668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.321673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.321687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 
00:36:15.524 [2024-07-15 12:26:05.331625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.331684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.331699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.331705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.331711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.331724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.341673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.341729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.341743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.341750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.341756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.341769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.351689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.351748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.351762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.351769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.351774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.351788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 
00:36:15.524 [2024-07-15 12:26:05.361778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.361835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.361848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.361855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.361861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.361875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.371743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.371800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.371814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.371821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.371826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.371840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.381786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.381870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.381884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.381890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.381896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.381910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 
00:36:15.524 [2024-07-15 12:26:05.391784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.391845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.391862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.391868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.391874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.391888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.401845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.401896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.401910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.401916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.401922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.401936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.411834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.411902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.411916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.411922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.411928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.411942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 
00:36:15.524 [2024-07-15 12:26:05.421898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.421960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.421973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.421980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.421986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.422000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.431914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.524 [2024-07-15 12:26:05.431972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.524 [2024-07-15 12:26:05.431986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.524 [2024-07-15 12:26:05.431992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.524 [2024-07-15 12:26:05.431998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.524 [2024-07-15 12:26:05.432014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.524 qpair failed and we were unable to recover it. 00:36:15.524 [2024-07-15 12:26:05.441942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.442003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.442018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.442024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.442030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.442044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-07-15 12:26:05.451978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.452030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.452045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.452051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.452057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.452071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-07-15 12:26:05.462018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.462077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.462091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.462097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.462103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.462117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-07-15 12:26:05.472037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.472095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.472109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.472116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.472122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.472136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-07-15 12:26:05.482058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.482112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.482129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.482135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.482140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.482154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-07-15 12:26:05.492081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.492147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.492162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.492168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.492173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.492187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.525 [2024-07-15 12:26:05.502121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.502177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.502191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.502197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.502203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.502217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 
00:36:15.525 [2024-07-15 12:26:05.512147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.525 [2024-07-15 12:26:05.512209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.525 [2024-07-15 12:26:05.512223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.525 [2024-07-15 12:26:05.512234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.525 [2024-07-15 12:26:05.512240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.525 [2024-07-15 12:26:05.512254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.525 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.522174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.522251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.522265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.522272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.522283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.522297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.532230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.532290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.532304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.532310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.532316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.532330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 
00:36:15.785 [2024-07-15 12:26:05.542247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.542307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.542320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.542327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.542333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.542347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.552272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.552331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.552345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.552351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.552357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.552371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.562302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.562358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.562373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.562379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.562385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.562399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 
00:36:15.785 [2024-07-15 12:26:05.572333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.572408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.572423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.572429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.572435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.572449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.582363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.582422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.582436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.582442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.582448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.582462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.592404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.592491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.592506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.592512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.592518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.592532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 
00:36:15.785 [2024-07-15 12:26:05.602415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.602471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.602486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.602492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.602498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.602511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.785 [2024-07-15 12:26:05.612442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.785 [2024-07-15 12:26:05.612497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.785 [2024-07-15 12:26:05.612512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.785 [2024-07-15 12:26:05.612518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.785 [2024-07-15 12:26:05.612527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.785 [2024-07-15 12:26:05.612541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.785 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.622477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.622538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.622553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.622559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.622565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.622578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-07-15 12:26:05.632493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.632556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.632571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.632577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.632582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.632596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.642524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.642583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.642597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.642603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.642609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.642622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.652531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.652583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.652597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.652603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.652609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.652622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-07-15 12:26:05.662602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.662662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.662676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.662683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.662688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.662702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.672587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.672652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.672666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.672672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.672678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.672691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.682653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.682712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.682726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.682732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.682738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.682751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-07-15 12:26:05.692671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.692726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.692740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.692747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.692752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.692766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.702703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.702766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.702780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.702790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.702796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.702809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.712761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.712850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.712864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.712871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.712876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.712890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-07-15 12:26:05.722800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.722856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.722870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.722877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.722882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.722896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.732787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.732840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.732854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.732860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.732865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.732879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.742836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.742895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.742908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.742915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.742920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.742934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 
00:36:15.786 [2024-07-15 12:26:05.752840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.752922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.786 [2024-07-15 12:26:05.752937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.786 [2024-07-15 12:26:05.752943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.786 [2024-07-15 12:26:05.752949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.786 [2024-07-15 12:26:05.752963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.786 qpair failed and we were unable to recover it. 00:36:15.786 [2024-07-15 12:26:05.762873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.786 [2024-07-15 12:26:05.762933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-07-15 12:26:05.762948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-07-15 12:26:05.762954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-07-15 12:26:05.762960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.787 [2024-07-15 12:26:05.762974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.787 qpair failed and we were unable to recover it. 00:36:15.787 [2024-07-15 12:26:05.772921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-07-15 12:26:05.772980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-07-15 12:26:05.772995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-07-15 12:26:05.773001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-07-15 12:26:05.773007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.787 [2024-07-15 12:26:05.773021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.787 qpair failed and we were unable to recover it. 
00:36:15.787 [2024-07-15 12:26:05.782940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.787 [2024-07-15 12:26:05.783004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.787 [2024-07-15 12:26:05.783019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.787 [2024-07-15 12:26:05.783026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.787 [2024-07-15 12:26:05.783032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:15.787 [2024-07-15 12:26:05.783045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:15.787 qpair failed and we were unable to recover it. 00:36:16.047 [2024-07-15 12:26:05.792943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.047 [2024-07-15 12:26:05.793046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.047 [2024-07-15 12:26:05.793063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.047 [2024-07-15 12:26:05.793070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.047 [2024-07-15 12:26:05.793076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.047 [2024-07-15 12:26:05.793090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.047 qpair failed and we were unable to recover it. 00:36:16.047 [2024-07-15 12:26:05.802918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.047 [2024-07-15 12:26:05.803015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.047 [2024-07-15 12:26:05.803029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.047 [2024-07-15 12:26:05.803036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.047 [2024-07-15 12:26:05.803042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.047 [2024-07-15 12:26:05.803056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.047 qpair failed and we were unable to recover it. 
00:36:16.047 [2024-07-15 12:26:05.813013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.047 [2024-07-15 12:26:05.813073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.047 [2024-07-15 12:26:05.813088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.047 [2024-07-15 12:26:05.813094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.047 [2024-07-15 12:26:05.813100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.047 [2024-07-15 12:26:05.813114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.047 qpair failed and we were unable to recover it. 00:36:16.047 [2024-07-15 12:26:05.823046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.047 [2024-07-15 12:26:05.823104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.047 [2024-07-15 12:26:05.823117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.047 [2024-07-15 12:26:05.823124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.047 [2024-07-15 12:26:05.823129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.047 [2024-07-15 12:26:05.823143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.047 qpair failed and we were unable to recover it. 00:36:16.047 [2024-07-15 12:26:05.833077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.047 [2024-07-15 12:26:05.833138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.047 [2024-07-15 12:26:05.833153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.047 [2024-07-15 12:26:05.833159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.047 [2024-07-15 12:26:05.833165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.047 [2024-07-15 12:26:05.833182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.047 qpair failed and we were unable to recover it. 
00:36:16.048 [2024-07-15 12:26:05.843102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.843157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.843171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.843178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.843184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.843198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.853058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.853114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.853128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.853134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.853140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.853153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.863171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.863234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.863248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.863254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.863261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.863274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 
00:36:16.048 [2024-07-15 12:26:05.873184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.873248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.873262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.873268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.873274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.873288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.883213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.883277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.883295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.883301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.883306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.883320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.893248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.893309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.893324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.893330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.893335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.893349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 
00:36:16.048 [2024-07-15 12:26:05.903313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.903372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.903386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.903392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.903398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.903412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.913300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.913363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.913377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.913383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.913388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.913402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.923337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.923398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.923412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.923418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.923427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.923442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 
00:36:16.048 [2024-07-15 12:26:05.933301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.933366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.933380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.933386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.933392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.933406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.943438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.943506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.943520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.943526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.943532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.943546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.953358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.953414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.953428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.953435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.953440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.953454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 
00:36:16.048 [2024-07-15 12:26:05.963439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.963498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.963512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.963519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.963524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.963538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.973477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.973540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.973554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.973560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.048 [2024-07-15 12:26:05.973566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.048 [2024-07-15 12:26:05.973580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.048 qpair failed and we were unable to recover it. 00:36:16.048 [2024-07-15 12:26:05.983509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.048 [2024-07-15 12:26:05.983569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.048 [2024-07-15 12:26:05.983584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.048 [2024-07-15 12:26:05.983590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:05.983595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:05.983609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 
00:36:16.049 [2024-07-15 12:26:05.993531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:05.993587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:05.993601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:05.993607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:05.993613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:05.993627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 00:36:16.049 [2024-07-15 12:26:06.003496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:06.003552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:06.003565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:06.003572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:06.003577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:06.003591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 00:36:16.049 [2024-07-15 12:26:06.013582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:06.013640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:06.013654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:06.013661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:06.013669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:06.013683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 
00:36:16.049 [2024-07-15 12:26:06.023622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:06.023679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:06.023693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:06.023699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:06.023705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:06.023718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 00:36:16.049 [2024-07-15 12:26:06.033640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:06.033700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:06.033714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:06.033720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:06.033726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:06.033740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 00:36:16.049 [2024-07-15 12:26:06.043680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.049 [2024-07-15 12:26:06.043737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.049 [2024-07-15 12:26:06.043750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.049 [2024-07-15 12:26:06.043757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.049 [2024-07-15 12:26:06.043763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.049 [2024-07-15 12:26:06.043776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.049 qpair failed and we were unable to recover it. 
00:36:16.328 [2024-07-15 12:26:06.053702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.328 [2024-07-15 12:26:06.053756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.328 [2024-07-15 12:26:06.053769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.328 [2024-07-15 12:26:06.053776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.328 [2024-07-15 12:26:06.053782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.328 [2024-07-15 12:26:06.053795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.328 qpair failed and we were unable to recover it. 00:36:16.328 [2024-07-15 12:26:06.063743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.328 [2024-07-15 12:26:06.063801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.328 [2024-07-15 12:26:06.063816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.328 [2024-07-15 12:26:06.063822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.328 [2024-07-15 12:26:06.063827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.328 [2024-07-15 12:26:06.063841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.328 qpair failed and we were unable to recover it. 00:36:16.328 [2024-07-15 12:26:06.073771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.328 [2024-07-15 12:26:06.073830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.328 [2024-07-15 12:26:06.073844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.328 [2024-07-15 12:26:06.073850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.328 [2024-07-15 12:26:06.073856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.328 [2024-07-15 12:26:06.073870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.328 qpair failed and we were unable to recover it. 
00:36:16.328 [2024-07-15 12:26:06.083768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.328 [2024-07-15 12:26:06.083854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.083869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.083876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.083882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.083897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.093865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.093921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.093935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.093942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.093947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.093961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.103848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.103909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.103923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.103932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.103938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.103952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 
00:36:16.329 [2024-07-15 12:26:06.113877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.113938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.113952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.113958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.113964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.113978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.123905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.123959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.123972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.123979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.123984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.123998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.133941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.134047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.134062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.134068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.134073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.134087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 
00:36:16.329 [2024-07-15 12:26:06.143975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.144030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.144044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.144050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.144056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.144070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.154002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.154062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.154076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.154083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.154089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.154102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.164031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.164086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.164100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.164107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.164113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.164127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 
00:36:16.329 [2024-07-15 12:26:06.174090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.174195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.174216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.174222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.329 [2024-07-15 12:26:06.174232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.329 [2024-07-15 12:26:06.174246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.329 qpair failed and we were unable to recover it. 00:36:16.329 [2024-07-15 12:26:06.184093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.329 [2024-07-15 12:26:06.184149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.329 [2024-07-15 12:26:06.184163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.329 [2024-07-15 12:26:06.184169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.184175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.184189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.194119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.194180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.194198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.194204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.194210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.194229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 
00:36:16.330 [2024-07-15 12:26:06.204133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.204190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.204204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.204210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.204216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.204234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.214164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.214222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.214240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.214247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.214253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.214267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.224201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.224264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.224278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.224285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.224290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.224304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 
00:36:16.330 [2024-07-15 12:26:06.234217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.234276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.234291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.234298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.234303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.234320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.244200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.244262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.244277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.244283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.244289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.244303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.254285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.254342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.254357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.254363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.254369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.254383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 
00:36:16.330 [2024-07-15 12:26:06.264374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.264463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.264477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.264484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.264489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.264504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.274345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.274442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.274456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.274462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.274468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.274482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 00:36:16.330 [2024-07-15 12:26:06.284385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.284447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.284466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.330 [2024-07-15 12:26:06.284473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.330 [2024-07-15 12:26:06.284478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.330 [2024-07-15 12:26:06.284492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.330 qpair failed and we were unable to recover it. 
00:36:16.330 [2024-07-15 12:26:06.294422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.330 [2024-07-15 12:26:06.294477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.330 [2024-07-15 12:26:06.294491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.331 [2024-07-15 12:26:06.294497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.331 [2024-07-15 12:26:06.294503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.331 [2024-07-15 12:26:06.294516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.331 qpair failed and we were unable to recover it. 00:36:16.331 [2024-07-15 12:26:06.304459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.331 [2024-07-15 12:26:06.304523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.331 [2024-07-15 12:26:06.304538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.331 [2024-07-15 12:26:06.304544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.331 [2024-07-15 12:26:06.304550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.331 [2024-07-15 12:26:06.304564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.331 qpair failed and we were unable to recover it. 00:36:16.331 [2024-07-15 12:26:06.314457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.331 [2024-07-15 12:26:06.314517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.331 [2024-07-15 12:26:06.314532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.331 [2024-07-15 12:26:06.314538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.331 [2024-07-15 12:26:06.314544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.331 [2024-07-15 12:26:06.314558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.331 qpair failed and we were unable to recover it. 
00:36:16.331 [2024-07-15 12:26:06.324427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.331 [2024-07-15 12:26:06.324484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.331 [2024-07-15 12:26:06.324498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.331 [2024-07-15 12:26:06.324505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.331 [2024-07-15 12:26:06.324511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.331 [2024-07-15 12:26:06.324528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.331 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.334518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.334608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.334623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.334630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.334635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.334649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.344595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.344674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.344689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.344696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.344702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.344716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 
00:36:16.591 [2024-07-15 12:26:06.354578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.354637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.354651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.354658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.354664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.354677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.364619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.364678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.364692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.364698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.364704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.364718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.374628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.374688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.374703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.374709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.374715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.374728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 
00:36:16.591 [2024-07-15 12:26:06.384675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.384768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.384782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.384788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.384793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.384808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.394696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.394757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.394771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.394777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.394783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.394796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.404718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.404775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.404790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.404797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.404803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.404817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 
00:36:16.591 [2024-07-15 12:26:06.414801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.414864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.414878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.414885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.414894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.414907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.424811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.424904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.424918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.424924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.424930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.424945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.434814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.434916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.434931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.434937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.434943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.434957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 
00:36:16.591 [2024-07-15 12:26:06.444989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.591 [2024-07-15 12:26:06.445062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.591 [2024-07-15 12:26:06.445077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.591 [2024-07-15 12:26:06.445083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.591 [2024-07-15 12:26:06.445089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.591 [2024-07-15 12:26:06.445103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.591 qpair failed and we were unable to recover it. 00:36:16.591 [2024-07-15 12:26:06.454904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.454986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.455000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.455007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.455012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.455026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.464941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.465004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.465019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.465025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.465030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.465044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 
00:36:16.592 [2024-07-15 12:26:06.474887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.474946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.474959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.474966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.474971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.474985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.484921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.484979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.484993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.484999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.485006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.485020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.495027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.495081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.495095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.495101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.495107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.495121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 
00:36:16.592 [2024-07-15 12:26:06.504970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.505029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.505043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.505053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.505059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.505073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.515107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.515171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.515185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.515191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.515197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.515211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.525112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.525176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.525190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.525195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.525201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.525215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 
00:36:16.592 [2024-07-15 12:26:06.535118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.535175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.535189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.535195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.535201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.535215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.545084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.545152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.545166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.545172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.545178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.545192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.555218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.555301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.555315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.555322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.555327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.555341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 
00:36:16.592 [2024-07-15 12:26:06.565267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.565344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.565358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.565364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.565369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.565384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.575236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.575295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.575309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.575315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.575321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.575335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 00:36:16.592 [2024-07-15 12:26:06.585286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.592 [2024-07-15 12:26:06.585355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.592 [2024-07-15 12:26:06.585369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.592 [2024-07-15 12:26:06.585375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.592 [2024-07-15 12:26:06.585381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.592 [2024-07-15 12:26:06.585395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.592 qpair failed and we were unable to recover it. 
00:36:16.852 [2024-07-15 12:26:06.595303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.595361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.595378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.595385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.595391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.595405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.852 [2024-07-15 12:26:06.605279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.605339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.605353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.605359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.605365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.605379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.852 [2024-07-15 12:26:06.615366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.615464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.615478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.615485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.615490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.615504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 
00:36:16.852 [2024-07-15 12:26:06.625390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.625449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.625463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.625469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.625475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.625489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.852 [2024-07-15 12:26:06.635340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.635412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.635427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.635433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.635438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.635452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.852 [2024-07-15 12:26:06.645367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.645426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.645441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.645447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.645453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.645466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 
00:36:16.852 [2024-07-15 12:26:06.655467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.655523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.655538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.655544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.655550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.655564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.852 [2024-07-15 12:26:06.665448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.852 [2024-07-15 12:26:06.665508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.852 [2024-07-15 12:26:06.665522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.852 [2024-07-15 12:26:06.665528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.852 [2024-07-15 12:26:06.665534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.852 [2024-07-15 12:26:06.665548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.852 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.675549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.675629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.675643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.675649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.675655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.675668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 
00:36:16.853 [2024-07-15 12:26:06.685538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.685598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.685615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.685621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.685627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.685640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.695522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.695582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.695597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.695603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.695608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.695623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.705651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.705710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.705724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.705730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.705736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.705749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 
00:36:16.853 [2024-07-15 12:26:06.715644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.715702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.715716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.715723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.715729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.715742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.725675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.725733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.725747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.725754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.725759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.725776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.735654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.735713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.735727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.735733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.735739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.735753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 
00:36:16.853 [2024-07-15 12:26:06.745740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.745796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.745811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.745817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.745823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.745836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.755772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.755831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.755845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.755851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.755857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.755870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.765797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.765856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.765870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.765877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.765882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.765896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 
00:36:16.853 [2024-07-15 12:26:06.775770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.775826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.775845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.775851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.775857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.775871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.785907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.785963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.785977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.785984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.785989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.786003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.795905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.795975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.795990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.795996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.796001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.796015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 
00:36:16.853 [2024-07-15 12:26:06.805853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.805909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.853 [2024-07-15 12:26:06.805923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.853 [2024-07-15 12:26:06.805930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.853 [2024-07-15 12:26:06.805935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.853 [2024-07-15 12:26:06.805949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.853 qpair failed and we were unable to recover it. 00:36:16.853 [2024-07-15 12:26:06.815988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.853 [2024-07-15 12:26:06.816054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.854 [2024-07-15 12:26:06.816069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.854 [2024-07-15 12:26:06.816075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.854 [2024-07-15 12:26:06.816084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.854 [2024-07-15 12:26:06.816098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.854 qpair failed and we were unable to recover it. 00:36:16.854 [2024-07-15 12:26:06.825987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.854 [2024-07-15 12:26:06.826049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.854 [2024-07-15 12:26:06.826063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.854 [2024-07-15 12:26:06.826069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.854 [2024-07-15 12:26:06.826075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.854 [2024-07-15 12:26:06.826088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.854 qpair failed and we were unable to recover it. 
00:36:16.854 [2024-07-15 12:26:06.835989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.854 [2024-07-15 12:26:06.836049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.854 [2024-07-15 12:26:06.836063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.854 [2024-07-15 12:26:06.836069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.854 [2024-07-15 12:26:06.836075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.854 [2024-07-15 12:26:06.836089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.854 qpair failed and we were unable to recover it. 00:36:16.854 [2024-07-15 12:26:06.846037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.854 [2024-07-15 12:26:06.846091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.854 [2024-07-15 12:26:06.846105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.854 [2024-07-15 12:26:06.846112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.854 [2024-07-15 12:26:06.846118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:16.854 [2024-07-15 12:26:06.846132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:16.854 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.856142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.856201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.856215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.856221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.856231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.856244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 
00:36:17.113 [2024-07-15 12:26:06.866037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.866100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.866114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.866120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.866126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.866140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.876129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.876189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.876203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.876210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.876215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.876232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.886134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.886230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.886245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.886251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.886257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.886271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 
00:36:17.113 [2024-07-15 12:26:06.896136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.896191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.896206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.896212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.896218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.896236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.906230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.906290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.906304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.906313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.906319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.906333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.916267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.916339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.916354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.916360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.916366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.916380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 
00:36:17.113 [2024-07-15 12:26:06.926278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.926334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.926348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.926355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.926360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.926374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.936356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.936414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.936428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.936435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.936441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.936455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 00:36:17.113 [2024-07-15 12:26:06.946399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.946458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.946473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.113 [2024-07-15 12:26:06.946479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.113 [2024-07-15 12:26:06.946485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.113 [2024-07-15 12:26:06.946499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.113 qpair failed and we were unable to recover it. 
00:36:17.113 [2024-07-15 12:26:06.956403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.113 [2024-07-15 12:26:06.956457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.113 [2024-07-15 12:26:06.956472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:06.956478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:06.956484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:06.956498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:06.966401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:06.966460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:06.966474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:06.966481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:06.966487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:06.966500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:06.976422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:06.976473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:06.976487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:06.976493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:06.976499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:06.976512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 
00:36:17.114 [2024-07-15 12:26:06.986465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:06.986520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:06.986534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:06.986540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:06.986546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:06.986560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:06.996527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:06.996605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:06.996620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:06.996629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:06.996634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:06.996649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.006513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.006571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.006585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.006592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.006597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.006612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 
00:36:17.114 [2024-07-15 12:26:07.016553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.016607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.016621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.016627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.016633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.016646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.026583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.026645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.026659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.026665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.026671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.026684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.036536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.036598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.036612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.036619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.036624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.036638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 
00:36:17.114 [2024-07-15 12:26:07.046632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.046689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.046703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.046709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.046715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.046729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.056699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.056756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.056771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.056777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.056783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.056796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.066708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.066802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.066816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.066823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.066828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.066842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 
00:36:17.114 [2024-07-15 12:26:07.076713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.076774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.114 [2024-07-15 12:26:07.076789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.114 [2024-07-15 12:26:07.076795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.114 [2024-07-15 12:26:07.076801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.114 [2024-07-15 12:26:07.076815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.114 qpair failed and we were unable to recover it. 00:36:17.114 [2024-07-15 12:26:07.086765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.114 [2024-07-15 12:26:07.086844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.115 [2024-07-15 12:26:07.086862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.115 [2024-07-15 12:26:07.086869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.115 [2024-07-15 12:26:07.086875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.115 [2024-07-15 12:26:07.086890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.115 qpair failed and we were unable to recover it. 00:36:17.115 [2024-07-15 12:26:07.096789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.115 [2024-07-15 12:26:07.096847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.115 [2024-07-15 12:26:07.096862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.115 [2024-07-15 12:26:07.096868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.115 [2024-07-15 12:26:07.096873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.115 [2024-07-15 12:26:07.096887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.115 qpair failed and we were unable to recover it. 
00:36:17.115 [2024-07-15 12:26:07.106855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.115 [2024-07-15 12:26:07.106942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.115 [2024-07-15 12:26:07.106955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.115 [2024-07-15 12:26:07.106962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.115 [2024-07-15 12:26:07.106967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.115 [2024-07-15 12:26:07.106982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.115 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.116825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.116892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.116906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.116912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.116918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.116931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.126926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.126978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.126993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.126999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.127004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.127022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 
00:36:17.373 [2024-07-15 12:26:07.136894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.136950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.136965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.136972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.136978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.136992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.146919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.146988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.147002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.147009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.147015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.147029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.156970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.157028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.157043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.157049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.157055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.157069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 
00:36:17.373 [2024-07-15 12:26:07.166983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.167036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.167050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.167056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.167062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.167076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.177005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.177062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.177079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.177086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.177092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.177106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.187026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.187098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.187112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.187118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.187124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.187137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 
00:36:17.373 [2024-07-15 12:26:07.197103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.373 [2024-07-15 12:26:07.197164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.373 [2024-07-15 12:26:07.197179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.373 [2024-07-15 12:26:07.197185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.373 [2024-07-15 12:26:07.197191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.373 [2024-07-15 12:26:07.197205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.373 qpair failed and we were unable to recover it. 00:36:17.373 [2024-07-15 12:26:07.207052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.207146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.207161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.207167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.207173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.207188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.217157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.217213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.217231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.217238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.217248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.217262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 
00:36:17.374 [2024-07-15 12:26:07.227142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.227203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.227217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.227227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.227234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.227248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.237181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.237244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.237258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.237265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.237271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.237285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.247210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.247268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.247282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.247288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.247294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.247308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 
00:36:17.374 [2024-07-15 12:26:07.257245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.257302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.257317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.257323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.257330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.257344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.267237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.267330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.267344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.267350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.267356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.267370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.277325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.277416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.277430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.277437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.277442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.277456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 
00:36:17.374 [2024-07-15 12:26:07.287313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.287387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.287402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.287409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.287414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.287431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.297349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.297407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.297422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.297428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.297434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.297448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.307415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.307470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.307484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.307490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.307499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:17.374 [2024-07-15 12:26:07.307512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:17.374 qpair failed and we were unable to recover it. 
00:36:17.374 [2024-07-15 12:26:07.317470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.317590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.317646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.317671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.317691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa244000b90 00:36:17.374 [2024-07-15 12:26:07.317739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.327486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.327574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.327602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.327617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.327637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa244000b90 00:36:17.374 [2024-07-15 12:26:07.327666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:17.374 qpair failed and we were unable to recover it. 00:36:17.374 [2024-07-15 12:26:07.337541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.337663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.374 [2024-07-15 12:26:07.337718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.374 [2024-07-15 12:26:07.337742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.374 [2024-07-15 12:26:07.337763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa23c000b90 00:36:17.374 [2024-07-15 12:26:07.337812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.374 qpair failed and we were unable to recover it. 
00:36:17.374 [2024-07-15 12:26:07.347518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.374 [2024-07-15 12:26:07.347609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.375 [2024-07-15 12:26:07.347637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.375 [2024-07-15 12:26:07.347651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.375 [2024-07-15 12:26:07.347664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa23c000b90 00:36:17.375 [2024-07-15 12:26:07.347699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:17.375 qpair failed and we were unable to recover it. 00:36:17.375 [2024-07-15 12:26:07.347803] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:17.375 A controller has encountered a failure and is being reset. 00:36:17.375 [2024-07-15 12:26:07.357580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.375 [2024-07-15 12:26:07.357704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.375 [2024-07-15 12:26:07.357760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.375 [2024-07-15 12:26:07.357785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.375 [2024-07-15 12:26:07.357806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15e3b60 00:36:17.375 [2024-07-15 12:26:07.357853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.375 qpair failed and we were unable to recover it. 00:36:17.375 [2024-07-15 12:26:07.367586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.375 [2024-07-15 12:26:07.367671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.375 [2024-07-15 12:26:07.367700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.375 [2024-07-15 12:26:07.367715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.375 [2024-07-15 12:26:07.367727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15e3b60 00:36:17.375 [2024-07-15 12:26:07.367755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.375 qpair failed and we were unable to recover it. 00:36:17.633 Controller properly reset. 
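The failure pattern above is the disconnect test doing its job rather than a regression: the target-side ctrlr.c rejects each I/O queue CONNECT because controller ID 0x1 was already torn down, the host sees that as a CONNECT completion with sct 1, sc 130 (0x82, which per the NVMe over Fabrics specification is a command-specific "Connect Invalid Parameters" status), and the qpair is then failed with transport error -6 (ENXIO, "No such device or address", as printed in the log). The retries continue until the admin queue reconnects, the failed keep-alive forces a reset, and the log reports "Controller properly reset." As a hedged illustration only of how the target-side view could be sampled while such a test runs: the workspace path below is taken from this job, but the polling loop, its interval, and the jq dependency are assumptions for the sketch, not something this run executed.

# Illustrative sketch, not part of this test run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 10); do
  # nvmf_get_subsystems is a standard SPDK JSON-RPC; it lists the subsystems the
  # target currently exposes. An empty result for the NQN while the host is still
  # retrying lines up with the "Unknown controller ID" rejections seen above.
  "$SPDK_DIR"/scripts/rpc.py nvmf_get_subsystems \
    | jq --arg nqn "$NQN" '.[] | select(.nqn == $nqn)'
  sleep 1
done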
00:36:17.633 Initializing NVMe Controllers 00:36:17.633 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:17.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:17.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:17.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:17.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:17.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:17.633 Initialization complete. Launching workers. 00:36:17.633 Starting thread on core 1 00:36:17.633 Starting thread on core 2 00:36:17.633 Starting thread on core 3 00:36:17.633 Starting thread on core 0 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:17.633 00:36:17.633 real 0m10.772s 00:36:17.633 user 0m19.187s 00:36:17.633 sys 0m4.670s 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:17.633 ************************************ 00:36:17.633 END TEST nvmf_target_disconnect_tc2 00:36:17.633 ************************************ 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:17.633 rmmod nvme_tcp 00:36:17.633 rmmod nvme_fabrics 00:36:17.633 rmmod nvme_keyring 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1367844 ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1367844 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1367844 ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1367844 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1367844 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1367844' 00:36:17.633 killing process with pid 1367844 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1367844 00:36:17.633 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1367844 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:17.891 12:26:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.440 12:26:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:20.440 00:36:20.440 real 0m19.257s 00:36:20.440 user 0m46.696s 00:36:20.440 sys 0m9.478s 00:36:20.440 12:26:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:20.440 12:26:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 ************************************ 00:36:20.440 END TEST nvmf_target_disconnect 00:36:20.440 ************************************ 00:36:20.440 12:26:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:36:20.440 12:26:09 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:36:20.440 12:26:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:20.440 12:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 12:26:09 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:36:20.440 00:36:20.440 real 28m58.199s 00:36:20.440 user 73m52.836s 00:36:20.440 sys 7m52.431s 00:36:20.440 12:26:09 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:20.440 12:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 ************************************ 00:36:20.440 END TEST nvmf_tcp 00:36:20.440 ************************************ 00:36:20.440 12:26:09 -- common/autotest_common.sh@1142 -- # return 0 00:36:20.440 12:26:09 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:36:20.440 12:26:09 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:20.440 12:26:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:20.440 12:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:20.440 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 ************************************ 00:36:20.440 START TEST spdkcli_nvmf_tcp 00:36:20.440 ************************************ 00:36:20.440 12:26:10 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:20.440 * Looking for test storage... 00:36:20.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1369370 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1369370 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1369370 ']' 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:20.440 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.440 [2024-07-15 12:26:10.183704] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:36:20.440 [2024-07-15 12:26:10.183754] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369370 ] 00:36:20.440 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.441 [2024-07-15 12:26:10.250164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:20.441 [2024-07-15 12:26:10.292265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.441 [2024-07-15 12:26:10.292266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.441 12:26:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:20.441 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:20.441 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:20.441 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:20.441 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:20.441 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:20.441 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:20.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:20.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:20.441 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:20.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:20.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:20.441 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:20.441 ' 00:36:23.726 [2024-07-15 12:26:12.989578] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.292 [2024-07-15 12:26:14.273839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:26.825 [2024-07-15 12:26:16.657133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:28.752 [2024-07-15 12:26:18.711550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:30.716 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:30.716 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:30.716 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:30.716 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:30.716 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:30.716 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:30.716 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:30.716 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:30.716 12:26:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:30.975 12:26:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:30.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:30.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:30.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:30.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:30.975 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:30.975 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:30.975 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:30.975 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:30.975 ' 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:36.280 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:36.280 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:36.280 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:36.280 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:36.280 12:26:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:36.281 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:36.281 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1369370 ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369370' 00:36:36.540 killing process with pid 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1369370 ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1369370 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1369370 ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1369370 00:36:36.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1369370) - No such process 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1369370 is not found' 00:36:36.540 Process with pid 1369370 is not found 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:36.540 00:36:36.540 real 0m16.492s 00:36:36.540 user 0m35.901s 00:36:36.540 sys 0m0.827s 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:36.540 12:26:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:36.540 ************************************ 00:36:36.540 END TEST spdkcli_nvmf_tcp 00:36:36.540 ************************************ 00:36:36.799 12:26:26 -- common/autotest_common.sh@1142 -- # return 0 00:36:36.799 12:26:26 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:36.799 12:26:26 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:36:36.799 12:26:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:36.799 12:26:26 -- common/autotest_common.sh@10 -- # set +x 00:36:36.799 ************************************ 00:36:36.799 START TEST nvmf_identify_passthru 00:36:36.799 ************************************ 00:36:36.799 12:26:26 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:36.799 * Looking for test storage... 00:36:36.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:36.799 12:26:26 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.799 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:36.800 12:26:26 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.800 12:26:26 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:36.800 12:26:26 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.800 12:26:26 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.800 12:26:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.800 12:26:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:36.800 12:26:26 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:36.800 12:26:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.071 12:26:31 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:42.071 12:26:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:42.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:42.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:42.071 Found net devices under 0000:86:00.0: cvl_0_0 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:42.071 Found net devices under 0000:86:00.1: cvl_0_1 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
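The device-discovery block above is nvmf/common.sh walking sysfs: each supported Intel E810 function (0x8086:0x159b at 0000:86:00.0 and 0000:86:00.1) is matched against the known PCI ID tables and then mapped to the kernel net device underneath it (cvl_0_0 and cvl_0_1). A minimal stand-alone sketch of the same sysfs walk, assuming the PCI addresses reported in this run, might look like:

  #!/usr/bin/env bash
  # Illustrative only - not the autotest helper itself. Lists the net devices
  # that sit behind each PCI network function, the same lookup the log shows.
  set -euo pipefail
  for pci in 0000:86:00.0 0000:86:00.1; do
      # Each PCI network function exposes its netdev name(s) under .../net/
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue                 # skip if the glob matched nothing
          name=$(basename "$dev")
          state=$(cat "$dev/operstate")             # e.g. "up" or "down"
          echo "Found net device under $pci: $name ($state)"
      done
  done

On this host the walk would print the two cvl_* interfaces that the following nvmf_tcp_init step splits between the default namespace and cvl_0_0_ns_spdk.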
00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.071 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:42.072 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:42.072 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.072 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:42.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:36:42.333 00:36:42.333 --- 10.0.0.2 ping statistics --- 00:36:42.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.333 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:42.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:36:42.333 00:36:42.333 --- 10.0.0.1 ping statistics --- 00:36:42.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.333 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:42.333 12:26:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:42.333 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:42.333 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:42.333 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:36:42.593 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:36:42.593 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:36:42.593 12:26:32 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:36:42.593 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:36:42.593 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:36:42.593 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:42.593 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:42.593 12:26:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:42.593 EAL: No free 2048 kB hugepages reported on node 1 00:36:46.780 
12:26:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:36:46.780 12:26:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:46.780 12:26:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:46.780 12:26:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:46.780 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1376391 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:50.968 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1376391 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1376391 ']' 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:50.968 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.968 [2024-07-15 12:26:40.732475] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:36:50.968 [2024-07-15 12:26:40.732523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.968 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.968 [2024-07-15 12:26:40.801088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.968 [2024-07-15 12:26:40.842917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.969 [2024-07-15 12:26:40.842956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
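The target startup just logged (identify_passthru.sh lines 30-38) follows a fixed order: launch nvmf_tgt inside the test namespace with --wait-for-rpc, enable the passthru identify handler before the framework initializes, then start the framework and add the TCP transport. A rough manual equivalent, assuming the workspace path from this run and SPDK's standard scripts/rpc.py client (the flags and RPC names are the ones visible in the log), is:

  # Sketch only; the real test uses the autotest rpc_cmd/waitforlisten helpers.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  tgt_pid=$!

  # Wait until the target listens on its default RPC socket.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

  # Must be sent before framework_start_init, as the test does above.
  "$RPC" nvmf_set_config --passthru-identify-ctrlr
  "$RPC" framework_start_init
  "$RPC" nvmf_create_transport -t tcp -o -u 8192

  echo "nvmf_tgt ($tgt_pid) is up with the TCP transport enabled"

The ordering matters because nvmf_set_config is only accepted while the app is still paused by --wait-for-rpc; once framework_start_init runs, the "Custom identify ctrlr handler enabled" notice seen in the log confirms the passthru path is active.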
00:36:50.969 [2024-07-15 12:26:40.842964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.969 [2024-07-15 12:26:40.842970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.969 [2024-07-15 12:26:40.842975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.969 [2024-07-15 12:26:40.843033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.969 [2024-07-15 12:26:40.843164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.969 [2024-07-15 12:26:40.843282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.969 [2024-07-15 12:26:40.843283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:36:50.969 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.969 INFO: Log level set to 20 00:36:50.969 INFO: Requests: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "method": "nvmf_set_config", 00:36:50.969 "id": 1, 00:36:50.969 "params": { 00:36:50.969 "admin_cmd_passthru": { 00:36:50.969 "identify_ctrlr": true 00:36:50.969 } 00:36:50.969 } 00:36:50.969 } 00:36:50.969 00:36:50.969 INFO: response: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "id": 1, 00:36:50.969 "result": true 00:36:50.969 } 00:36:50.969 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.969 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:50.969 INFO: Setting log level to 20 00:36:50.969 INFO: Setting log level to 20 00:36:50.969 INFO: Log level set to 20 00:36:50.969 INFO: Log level set to 20 00:36:50.969 INFO: Requests: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "method": "framework_start_init", 00:36:50.969 "id": 1 00:36:50.969 } 00:36:50.969 00:36:50.969 INFO: Requests: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "method": "framework_start_init", 00:36:50.969 "id": 1 00:36:50.969 } 00:36:50.969 00:36:50.969 [2024-07-15 12:26:40.955135] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:50.969 INFO: response: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "id": 1, 00:36:50.969 "result": true 00:36:50.969 } 00:36:50.969 00:36:50.969 INFO: response: 00:36:50.969 { 00:36:50.969 "jsonrpc": "2.0", 00:36:50.969 "id": 1, 00:36:50.969 "result": true 00:36:50.969 } 00:36:50.969 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:50.969 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:50.969 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:50.969 12:26:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:50.969 INFO: Setting log level to 40 00:36:50.969 INFO: Setting log level to 40 00:36:50.969 INFO: Setting log level to 40 00:36:51.227 [2024-07-15 12:26:40.968638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.227 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.227 12:26:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:51.227 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:51.227 12:26:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:51.227 12:26:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:51.227 12:26:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.227 12:26:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.506 Nvme0n1 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.506 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.506 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.506 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.506 [2024-07-15 12:26:43.864653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.506 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.506 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.506 [ 00:36:54.506 { 00:36:54.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:54.507 "subtype": "Discovery", 00:36:54.507 "listen_addresses": [], 00:36:54.507 "allow_any_host": true, 00:36:54.507 "hosts": [] 00:36:54.507 }, 00:36:54.507 { 00:36:54.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.507 "subtype": "NVMe", 00:36:54.507 "listen_addresses": [ 00:36:54.507 { 00:36:54.507 "trtype": "TCP", 00:36:54.507 "adrfam": "IPv4", 00:36:54.507 "traddr": "10.0.0.2", 00:36:54.507 "trsvcid": "4420" 00:36:54.507 } 00:36:54.507 ], 00:36:54.507 "allow_any_host": true, 00:36:54.507 "hosts": [], 00:36:54.507 "serial_number": 
"SPDK00000000000001", 00:36:54.507 "model_number": "SPDK bdev Controller", 00:36:54.507 "max_namespaces": 1, 00:36:54.507 "min_cntlid": 1, 00:36:54.507 "max_cntlid": 65519, 00:36:54.507 "namespaces": [ 00:36:54.507 { 00:36:54.507 "nsid": 1, 00:36:54.507 "bdev_name": "Nvme0n1", 00:36:54.507 "name": "Nvme0n1", 00:36:54.507 "nguid": "A05853CF0EEC4F27A91F983A085653C7", 00:36:54.507 "uuid": "a05853cf-0eec-4f27-a91f-983a085653c7" 00:36:54.507 } 00:36:54.507 ] 00:36:54.507 } 00:36:54.507 ] 00:36:54.507 12:26:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.507 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:54.507 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:54.507 12:26:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:54.507 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:54.507 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:54.507 12:26:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:54.507 rmmod nvme_tcp 00:36:54.507 rmmod nvme_fabrics 00:36:54.507 rmmod nvme_keyring 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:54.507 12:26:44 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1376391 ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1376391 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1376391 ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1376391 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1376391 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1376391' 00:36:54.507 killing process with pid 1376391 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1376391 00:36:54.507 12:26:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1376391 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:55.883 12:26:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.883 12:26:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:55.883 12:26:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.454 12:26:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:58.454 00:36:58.454 real 0m21.347s 00:36:58.454 user 0m27.336s 00:36:58.454 sys 0m4.976s 00:36:58.454 12:26:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:58.454 12:26:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.454 ************************************ 00:36:58.454 END TEST nvmf_identify_passthru 00:36:58.454 ************************************ 00:36:58.454 12:26:47 -- common/autotest_common.sh@1142 -- # return 0 00:36:58.454 12:26:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:58.454 12:26:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:58.454 12:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:58.454 12:26:47 -- common/autotest_common.sh@10 -- # set +x 00:36:58.454 ************************************ 00:36:58.454 START TEST nvmf_dif 00:36:58.454 ************************************ 00:36:58.454 12:26:47 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:58.454 * Looking for test storage... 
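Just above, nvmf_identify_passthru finishes by running nvmftestfini: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the target process is killed, and the cvl_* addresses and the spdk namespace are cleaned up before the next test (nvmf_dif) rebuilds the same environment. A condensed, illustrative version of that teardown, assuming the interface and namespace names from this run and using ip netns delete in place of the _remove_spdk_ns helper, would be roughly:

  # Sketch of the cleanup nvmftestfini performs; errors are ignored so it can
  # be re-run safely on a host that is already clean.
  sudo modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true
  [ -n "${tgt_pid:-}" ] && kill "$tgt_pid" 2>/dev/null || true
  sudo ip -4 addr flush cvl_0_1 2>/dev/null || true
  sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true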
00:36:58.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:58.454 12:26:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.454 12:26:48 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.454 12:26:48 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.454 12:26:48 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.454 12:26:48 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.454 12:26:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.454 12:26:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.454 12:26:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.454 12:26:48 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:36:58.454 12:26:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:58.455 12:26:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:58.455 12:26:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:58.455 12:26:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:58.455 12:26:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:58.455 12:26:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.455 12:26:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:58.455 12:26:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:58.455 12:26:48 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:58.455 12:26:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:03.731 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:03.731 12:26:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:03.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:03.732 Found net devices under 0000:86:00.0: cvl_0_0 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:03.732 Found net devices under 0000:86:00.1: cvl_0_1 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:03.732 12:26:53 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:03.991 12:26:53 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:03.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:03.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:37:03.991 00:37:03.991 --- 10.0.0.2 ping statistics --- 00:37:03.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.991 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:03.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:03.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:37:03.991 00:37:03.991 --- 10.0.0.1 ping statistics --- 00:37:03.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.991 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:03.991 12:26:53 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:06.517 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:37:06.517 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:37:06.517 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:06.775 12:26:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:06.775 12:26:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1381845 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1381845 00:37:06.775 12:26:56 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1381845 ']' 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:06.775 12:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:06.775 [2024-07-15 12:26:56.715146] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:37:06.775 [2024-07-15 12:26:56.715195] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.775 EAL: No free 2048 kB hugepages reported on node 1 00:37:07.034 [2024-07-15 12:26:56.787921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.034 [2024-07-15 12:26:56.830450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.034 [2024-07-15 12:26:56.830487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.034 [2024-07-15 12:26:56.830494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.034 [2024-07-15 12:26:56.830500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.034 [2024-07-15 12:26:56.830505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
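The target application is being launched inside the network namespace that nvmf_tcp_init built above: one E810 port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk for the target, the peer port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, TCP port 4420 is opened, and both directions are ping-checked. A condensed sketch of that sequence, assuming iproute2 and iptables and reusing the interface names from this run:

  ns=cvl_0_0_ns_spdk; tgt_if=cvl_0_0; ini_if=cvl_0_1
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"                               # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"                           # initiator address, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"       # target address, inside the namespace
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP connections in
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1    # verify reachability both ways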
00:37:07.034 [2024-07-15 12:26:56.830538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:37:07.034 12:26:56 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 12:26:56 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.034 12:26:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:07.034 12:26:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 [2024-07-15 12:26:56.955581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.034 12:26:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:07.034 12:26:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 ************************************ 00:37:07.034 START TEST fio_dif_1_default 00:37:07.034 ************************************ 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.034 12:26:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 bdev_null0 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:07.034 [2024-07-15 12:26:57.023859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:07.034 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:07.034 { 00:37:07.034 "params": { 00:37:07.034 "name": "Nvme$subsystem", 00:37:07.034 "trtype": "$TEST_TRANSPORT", 00:37:07.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.034 "adrfam": "ipv4", 00:37:07.034 "trsvcid": "$NVMF_PORT", 00:37:07.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.034 "hdgst": ${hdgst:-false}, 00:37:07.034 "ddgst": ${ddgst:-false} 00:37:07.034 }, 00:37:07.035 "method": "bdev_nvme_attach_controller" 00:37:07.035 } 00:37:07.035 EOF 00:37:07.035 )") 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:07.035 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:07.305 "params": { 00:37:07.305 "name": "Nvme0", 00:37:07.305 "trtype": "tcp", 00:37:07.305 "traddr": "10.0.0.2", 00:37:07.305 "adrfam": "ipv4", 00:37:07.305 "trsvcid": "4420", 00:37:07.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:07.305 "hdgst": false, 00:37:07.305 "ddgst": false 00:37:07.305 }, 00:37:07.305 "method": "bdev_nvme_attach_controller" 00:37:07.305 }' 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:07.305 12:26:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:07.578 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:07.578 fio-3.35 00:37:07.578 Starting 1 thread 00:37:07.578 EAL: No free 2048 kB hugepages reported on node 1 00:37:19.804 00:37:19.804 filename0: (groupid=0, jobs=1): err= 0: pid=1382164: Mon Jul 15 12:27:07 2024 00:37:19.804 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10004msec) 00:37:19.804 slat (nsec): min=5587, max=46229, avg=6192.77, stdev=1332.25 00:37:19.804 clat (usec): min=424, max=45870, avg=21043.64, stdev=20504.21 00:37:19.804 lat (usec): min=430, max=45903, avg=21049.83, stdev=20504.14 00:37:19.804 clat percentiles (usec): 00:37:19.804 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 486], 20.00th=[ 494], 00:37:19.804 | 30.00th=[ 502], 40.00th=[ 510], 50.00th=[41157], 60.00th=[41157], 00:37:19.804 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:37:19.804 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:37:19.804 | 99.99th=[45876] 00:37:19.804 bw ( KiB/s): min= 704, max= 768, per=99.78%, avg=758.40, stdev=23.45, samples=20 00:37:19.804 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:37:19.804 lat 
(usec) : 500=28.53%, 750=21.37% 00:37:19.804 lat (msec) : 50=50.11% 00:37:19.804 cpu : usr=94.66%, sys=5.07%, ctx=15, majf=0, minf=262 00:37:19.804 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:19.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:19.804 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:19.804 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:19.804 00:37:19.804 Run status group 0 (all jobs): 00:37:19.804 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10004-10004msec 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 00:37:19.804 real 0m11.076s 00:37:19.804 user 0m16.138s 00:37:19.804 sys 0m0.837s 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 ************************************ 00:37:19.804 END TEST fio_dif_1_default 00:37:19.804 ************************************ 00:37:19.804 12:27:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:19.804 12:27:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:19.804 12:27:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:19.804 12:27:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 ************************************ 00:37:19.804 START TEST fio_dif_1_multi_subsystems 00:37:19.804 ************************************ 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
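The multi-subsystem variant starting here repeats, per index, the same RPC sequence the single-subsystem test used above: a metadata-capable null bdev, an NVMe-oF subsystem, a namespace, and a TCP listener, all behind the transport created once with DIF insert/strip enabled. A sketch of those calls for index 0, assuming rpc_cmd resolves to SPDK's scripts/rpc.py talking to the target's default /var/tmp/spdk.sock:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip         # done once, before any subsystem exists
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1  # 64 MB null bdev, 512B blocks + 16B metadata
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420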
00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 bdev_null0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 [2024-07-15 12:27:08.173955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 bdev_null1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:19.804 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:19.804 { 00:37:19.804 "params": { 00:37:19.804 "name": "Nvme$subsystem", 00:37:19.804 "trtype": "$TEST_TRANSPORT", 00:37:19.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:19.804 "adrfam": "ipv4", 00:37:19.804 "trsvcid": "$NVMF_PORT", 00:37:19.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:19.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:19.805 "hdgst": ${hdgst:-false}, 00:37:19.805 "ddgst": ${ddgst:-false} 00:37:19.805 }, 00:37:19.805 "method": "bdev_nvme_attach_controller" 00:37:19.805 } 00:37:19.805 EOF 00:37:19.805 )") 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:19.805 { 00:37:19.805 "params": { 00:37:19.805 "name": "Nvme$subsystem", 00:37:19.805 "trtype": "$TEST_TRANSPORT", 00:37:19.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:19.805 "adrfam": "ipv4", 00:37:19.805 "trsvcid": "$NVMF_PORT", 00:37:19.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:19.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:19.805 "hdgst": ${hdgst:-false}, 00:37:19.805 "ddgst": ${ddgst:-false} 00:37:19.805 }, 00:37:19.805 "method": "bdev_nvme_attach_controller" 00:37:19.805 } 00:37:19.805 EOF 00:37:19.805 )") 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
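The JSON being assembled here is what the fio spdk_bdev plugin consumes: one bdev_nvme_attach_controller entry per subsystem, pointing the initiator-side bdev layer at the listeners created earlier. Outside of fio, an equivalent attachment could be issued directly against a running SPDK application; a sketch with the Nvme0 values from this run (the flag spellings are the usual rpc.py ones, not copied from this log):

  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0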
00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:19.805 "params": { 00:37:19.805 "name": "Nvme0", 00:37:19.805 "trtype": "tcp", 00:37:19.805 "traddr": "10.0.0.2", 00:37:19.805 "adrfam": "ipv4", 00:37:19.805 "trsvcid": "4420", 00:37:19.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:19.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:19.805 "hdgst": false, 00:37:19.805 "ddgst": false 00:37:19.805 }, 00:37:19.805 "method": "bdev_nvme_attach_controller" 00:37:19.805 },{ 00:37:19.805 "params": { 00:37:19.805 "name": "Nvme1", 00:37:19.805 "trtype": "tcp", 00:37:19.805 "traddr": "10.0.0.2", 00:37:19.805 "adrfam": "ipv4", 00:37:19.805 "trsvcid": "4420", 00:37:19.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:19.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:19.805 "hdgst": false, 00:37:19.805 "ddgst": false 00:37:19.805 }, 00:37:19.805 "method": "bdev_nvme_attach_controller" 00:37:19.805 }' 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:19.805 12:27:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:19.805 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:19.805 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:19.805 fio-3.35 00:37:19.805 Starting 2 threads 00:37:19.805 EAL: No free 2048 kB hugepages reported on node 1 00:37:29.854 00:37:29.854 filename0: (groupid=0, jobs=1): err= 0: pid=1384456: Mon Jul 15 12:27:19 2024 00:37:29.854 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10002msec) 00:37:29.854 slat (nsec): min=5956, max=62726, avg=8138.79, stdev=2974.85 00:37:29.854 clat (usec): min=492, max=42337, avg=21032.03, stdev=20153.86 00:37:29.854 lat (usec): min=499, max=42349, avg=21040.16, stdev=20153.45 00:37:29.854 clat percentiles (usec): 00:37:29.854 | 1.00th=[ 502], 5.00th=[ 603], 10.00th=[ 627], 20.00th=[ 898], 00:37:29.854 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[40633], 60.00th=[41157], 00:37:29.854 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:29.854 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:29.854 | 99.99th=[42206] 
00:37:29.854 bw ( KiB/s): min= 704, max= 768, per=50.13%, avg=761.26, stdev=20.18, samples=19 00:37:29.854 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:37:29.854 lat (usec) : 500=0.74%, 750=17.26%, 1000=25.26% 00:37:29.854 lat (msec) : 2=6.63%, 50=50.11% 00:37:29.854 cpu : usr=97.50%, sys=2.23%, ctx=14, majf=0, minf=169 00:37:29.855 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.855 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.855 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:29.855 filename1: (groupid=0, jobs=1): err= 0: pid=1384457: Mon Jul 15 12:27:19 2024 00:37:29.855 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:37:29.855 slat (nsec): min=5956, max=61222, avg=7992.90, stdev=2775.93 00:37:29.855 clat (usec): min=471, max=42498, avg=21077.19, stdev=20325.39 00:37:29.855 lat (usec): min=477, max=42505, avg=21085.18, stdev=20324.94 00:37:29.855 clat percentiles (usec): 00:37:29.855 | 1.00th=[ 498], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 619], 00:37:29.855 | 30.00th=[ 627], 40.00th=[ 725], 50.00th=[40633], 60.00th=[41157], 00:37:29.855 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:37:29.855 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:37:29.855 | 99.99th=[42730] 00:37:29.855 bw ( KiB/s): min= 672, max= 768, per=50.00%, avg=759.58, stdev=25.78, samples=19 00:37:29.855 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:37:29.855 lat (usec) : 500=1.27%, 750=39.77%, 1000=5.80% 00:37:29.855 lat (msec) : 2=2.95%, 50=50.21% 00:37:29.855 cpu : usr=97.70%, sys=2.04%, ctx=13, majf=0, minf=92 00:37:29.855 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.855 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.855 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:29.855 00:37:29.855 Run status group 0 (all jobs): 00:37:29.855 READ: bw=1518KiB/s (1555kB/s), 758KiB/s-760KiB/s (776kB/s-778kB/s), io=14.8MiB (15.5MB), run=10002-10002msec 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:29.855 12:27:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 00:37:29.855 real 0m11.548s 00:37:29.855 user 0m26.670s 00:37:29.855 sys 0m0.795s 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 ************************************ 00:37:29.855 END TEST fio_dif_1_multi_subsystems 00:37:29.855 ************************************ 00:37:29.855 12:27:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:29.855 12:27:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:29.855 12:27:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:29.855 12:27:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 ************************************ 00:37:29.855 START TEST fio_dif_rand_params 00:37:29.855 ************************************ 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 bdev_null0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:29.855 [2024-07-15 12:27:19.802192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:29.855 { 00:37:29.855 "params": { 00:37:29.855 "name": "Nvme$subsystem", 00:37:29.855 "trtype": "$TEST_TRANSPORT", 00:37:29.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.855 "adrfam": "ipv4", 00:37:29.855 
"trsvcid": "$NVMF_PORT", 00:37:29.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.855 "hdgst": ${hdgst:-false}, 00:37:29.855 "ddgst": ${ddgst:-false} 00:37:29.855 }, 00:37:29.855 "method": "bdev_nvme_attach_controller" 00:37:29.855 } 00:37:29.855 EOF 00:37:29.855 )") 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.855 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:30.131 "params": { 00:37:30.131 "name": "Nvme0", 00:37:30.131 "trtype": "tcp", 00:37:30.131 "traddr": "10.0.0.2", 00:37:30.131 "adrfam": "ipv4", 00:37:30.131 "trsvcid": "4420", 00:37:30.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:30.131 "hdgst": false, 00:37:30.131 "ddgst": false 00:37:30.131 }, 00:37:30.131 "method": "bdev_nvme_attach_controller" 00:37:30.131 }' 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:30.131 12:27:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:30.392 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:30.392 ... 
00:37:30.392 fio-3.35 00:37:30.392 Starting 3 threads 00:37:30.392 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.947 00:37:36.947 filename0: (groupid=0, jobs=1): err= 0: pid=1386452: Mon Jul 15 12:27:25 2024 00:37:36.947 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(183MiB/5045msec) 00:37:36.947 slat (nsec): min=6264, max=26749, avg=10158.52, stdev=2654.10 00:37:36.947 clat (usec): min=3698, max=88525, avg=10281.27, stdev=10702.88 00:37:36.947 lat (usec): min=3706, max=88533, avg=10291.43, stdev=10702.92 00:37:36.947 clat percentiles (usec): 00:37:36.947 | 1.00th=[ 3982], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 5669], 00:37:36.947 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7504], 60.00th=[ 8291], 00:37:36.947 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11338], 95.00th=[46924], 00:37:36.947 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[88605], 00:37:36.947 | 99.99th=[88605] 00:37:36.947 bw ( KiB/s): min=16896, max=43520, per=34.63%, avg=37478.40, stdev=7868.46, samples=10 00:37:36.947 iops : min= 132, max= 340, avg=292.80, stdev=61.47, samples=10 00:37:36.947 lat (msec) : 4=1.09%, 10=79.26%, 20=12.62%, 50=5.93%, 100=1.09% 00:37:36.947 cpu : usr=94.87%, sys=4.84%, ctx=12, majf=0, minf=39 00:37:36.947 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.947 issued rwts: total=1466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.947 filename0: (groupid=0, jobs=1): err= 0: pid=1386453: Mon Jul 15 12:27:25 2024 00:37:36.947 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(171MiB/5002msec) 00:37:36.947 slat (nsec): min=6211, max=24718, avg=10429.40, stdev=2548.91 00:37:36.947 clat (usec): min=3631, max=51162, avg=10939.55, stdev=10844.83 00:37:36.947 lat (usec): min=3641, max=51174, avg=10949.98, stdev=10844.74 00:37:36.947 clat percentiles (usec): 00:37:36.947 | 1.00th=[ 4146], 5.00th=[ 4686], 10.00th=[ 5538], 20.00th=[ 6259], 00:37:36.947 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 8094], 60.00th=[ 8717], 00:37:36.947 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11863], 95.00th=[47449], 00:37:36.947 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:37:36.947 | 99.99th=[51119] 00:37:36.947 bw ( KiB/s): min=27904, max=41728, per=31.35%, avg=33934.22, stdev=4274.34, samples=9 00:37:36.947 iops : min= 218, max= 326, avg=265.11, stdev=33.39, samples=9 00:37:36.947 lat (msec) : 4=0.51%, 10=76.64%, 20=15.18%, 50=6.72%, 100=0.95% 00:37:36.947 cpu : usr=94.78%, sys=4.90%, ctx=10, majf=0, minf=117 00:37:36.947 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.947 issued rwts: total=1370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.947 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.947 filename0: (groupid=0, jobs=1): err= 0: pid=1386454: Mon Jul 15 12:27:25 2024 00:37:36.947 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(179MiB/5044msec) 00:37:36.947 slat (nsec): min=6262, max=22539, avg=9944.35, stdev=2532.32 00:37:36.947 clat (usec): min=3287, max=52934, avg=10538.25, stdev=10633.67 00:37:36.947 lat (usec): min=3294, max=52956, avg=10548.20, stdev=10633.90 00:37:36.947 clat 
percentiles (usec): 00:37:36.947 | 1.00th=[ 3949], 5.00th=[ 4113], 10.00th=[ 4359], 20.00th=[ 6128], 00:37:36.947 | 30.00th=[ 6587], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8586], 00:37:36.947 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11731], 95.00th=[46924], 00:37:36.947 | 99.00th=[50070], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:37:36.947 | 99.99th=[52691] 00:37:36.947 bw ( KiB/s): min=32256, max=54016, per=33.78%, avg=36563.50, stdev=6615.40, samples=10 00:37:36.947 iops : min= 252, max= 422, avg=285.60, stdev=51.71, samples=10 00:37:36.947 lat (msec) : 4=1.26%, 10=77.55%, 20=14.13%, 50=5.80%, 100=1.26% 00:37:36.947 cpu : usr=95.44%, sys=4.24%, ctx=10, majf=0, minf=100 00:37:36.947 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.948 issued rwts: total=1430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:36.948 00:37:36.948 Run status group 0 (all jobs): 00:37:36.948 READ: bw=106MiB/s (111MB/s), 34.2MiB/s-36.3MiB/s (35.9MB/s-38.1MB/s), io=533MiB (559MB), run=5002-5045msec 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- 
# local sub_id=0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 bdev_null0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 [2024-07-15 12:27:25.955631] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 bdev_null1 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 bdev_null2 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
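For reference, the rpc_cmd trace above boils down to four SPDK RPCs per subsystem: create a DIF-protected null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Replayed by hand against an already running nvmf target it is roughly the following (a minimal sketch; the scripts/rpc.py path is an assumption, while the sizes, NQNs, address 10.0.0.2 and port 4420 are taken from the trace):

  # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2 -- as traced above
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  # expose the bdev over NVMe/TCP
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the test repeats the same four calls for bdev_null1/cnode1 and bdev_null2/cnode2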
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.948 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.949 { 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme$subsystem", 00:37:36.949 "trtype": "$TEST_TRANSPORT", 00:37:36.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "$NVMF_PORT", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.949 "hdgst": ${hdgst:-false}, 00:37:36.949 "ddgst": ${ddgst:-false} 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 } 00:37:36.949 EOF 00:37:36.949 )") 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.949 { 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme$subsystem", 00:37:36.949 "trtype": "$TEST_TRANSPORT", 00:37:36.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "$NVMF_PORT", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.949 "hdgst": ${hdgst:-false}, 00:37:36.949 "ddgst": ${ddgst:-false} 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 } 00:37:36.949 EOF 00:37:36.949 )") 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file++ )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:36.949 { 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme$subsystem", 00:37:36.949 "trtype": "$TEST_TRANSPORT", 00:37:36.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "$NVMF_PORT", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.949 "hdgst": ${hdgst:-false}, 00:37:36.949 "ddgst": ${ddgst:-false} 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 } 00:37:36.949 EOF 00:37:36.949 )") 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme0", 00:37:36.949 "trtype": "tcp", 00:37:36.949 "traddr": "10.0.0.2", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "4420", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.949 "hdgst": false, 00:37:36.949 "ddgst": false 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 },{ 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme1", 00:37:36.949 "trtype": "tcp", 00:37:36.949 "traddr": "10.0.0.2", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "4420", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:36.949 "hdgst": false, 00:37:36.949 "ddgst": false 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 },{ 00:37:36.949 "params": { 00:37:36.949 "name": "Nvme2", 00:37:36.949 "trtype": "tcp", 00:37:36.949 "traddr": "10.0.0.2", 00:37:36.949 "adrfam": "ipv4", 00:37:36.949 "trsvcid": "4420", 00:37:36.949 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:36.949 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:36.949 "hdgst": false, 00:37:36.949 "ddgst": false 00:37:36.949 }, 00:37:36.949 "method": "bdev_nvme_attach_controller" 00:37:36.949 }' 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:36.949 12:27:26 
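The heredoc fragments captured above are what gen_nvmf_target_json collects into a bdev configuration: one bdev_nvme_attach_controller entry per target subsystem, joined by jq and handed to fio's spdk_bdev ioengine on /dev/fd/62, while gen_fio_conf writes the job file to /dev/fd/61. Outside the test harness the equivalent invocation looks roughly like this (a minimal sketch; conf.json and dif.fio are hypothetical file names standing in for the two file descriptors, and the plugin path is the one built in this workspace):

  # run fio through the external SPDK bdev ioengine instead of a kernel block device
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json dif.fio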
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:36.949 12:27:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.949 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:36.949 ... 00:37:36.949 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:36.949 ... 00:37:36.949 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:36.949 ... 00:37:36.949 fio-3.35 00:37:36.949 Starting 24 threads 00:37:36.949 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.154 00:37:49.154 filename0: (groupid=0, jobs=1): err= 0: pid=1387713: Mon Jul 15 12:27:37 2024 00:37:49.154 read: IOPS=575, BW=2304KiB/s (2359kB/s)(22.5MiB/10002msec) 00:37:49.154 slat (nsec): min=6914, max=83264, avg=24729.25, stdev=19285.41 00:37:49.154 clat (usec): min=4785, max=40179, avg=27574.07, stdev=2695.90 00:37:49.154 lat (usec): min=4799, max=40191, avg=27598.80, stdev=2696.02 00:37:49.154 clat percentiles (usec): 00:37:49.154 | 1.00th=[ 6456], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:37:49.154 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:37:49.154 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:37:49.154 | 99.00th=[29230], 99.50th=[30016], 99.90th=[40109], 99.95th=[40109], 00:37:49.154 | 99.99th=[40109] 00:37:49.154 bw ( KiB/s): min= 2176, max= 2688, per=4.25%, avg=2303.21, stdev=104.53, samples=19 00:37:49.154 iops : min= 544, max= 672, avg=575.68, stdev=26.14, samples=19 00:37:49.154 lat (msec) : 10=1.39%, 20=0.28%, 50=98.33% 00:37:49.154 cpu : usr=98.76%, sys=0.84%, ctx=20, majf=0, minf=67 00:37:49.154 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.154 filename0: (groupid=0, jobs=1): err= 0: pid=1387714: Mon Jul 15 12:27:37 2024 00:37:49.154 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.3MiB/10020msec) 00:37:49.154 slat (nsec): min=6402, max=94531, avg=40610.83, stdev=25042.30 00:37:49.154 clat (usec): min=12358, max=91039, avg=27697.00, stdev=4633.54 00:37:49.154 lat (usec): min=12381, max=91094, avg=27737.61, stdev=4635.23 00:37:49.154 clat percentiles (usec): 00:37:49.154 | 1.00th=[17957], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:37:49.154 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:37:49.154 | 70.00th=[27657], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:37:49.154 | 99.00th=[34341], 99.50th=[73925], 99.90th=[90702], 99.95th=[90702], 00:37:49.154 | 99.99th=[90702] 00:37:49.154 bw ( KiB/s): min= 2048, max= 2544, per=4.19%, avg=2273.60, stdev=99.80, samples=20 00:37:49.154 iops : min= 512, max= 636, avg=568.25, stdev=24.91, samples=20 00:37:49.154 lat 
(msec) : 20=1.75%, 50=97.69%, 100=0.56% 00:37:49.154 cpu : usr=98.95%, sys=0.67%, ctx=12, majf=0, minf=29 00:37:49.154 IO depths : 1=5.6%, 2=11.2%, 4=22.8%, 8=53.3%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:49.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.154 filename0: (groupid=0, jobs=1): err= 0: pid=1387715: Mon Jul 15 12:27:37 2024 00:37:49.154 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10046msec) 00:37:49.154 slat (nsec): min=7589, max=80298, avg=35900.65, stdev=14621.09 00:37:49.154 clat (usec): min=25150, max=90989, avg=27977.68, stdev=3487.52 00:37:49.154 lat (usec): min=25162, max=91013, avg=28013.59, stdev=3487.53 00:37:49.154 clat percentiles (usec): 00:37:49.154 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:37:49.154 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:37:49.154 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.154 | 99.00th=[29492], 99.50th=[45876], 99.90th=[90702], 99.95th=[90702], 00:37:49.154 | 99.99th=[90702] 00:37:49.154 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.154 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.154 lat (msec) : 50=99.72%, 100=0.28% 00:37:49.154 cpu : usr=97.43%, sys=1.47%, ctx=236, majf=0, minf=51 00:37:49.154 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.154 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.154 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.154 filename0: (groupid=0, jobs=1): err= 0: pid=1387716: Mon Jul 15 12:27:37 2024 00:37:49.154 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.5MiB/10091msec) 00:37:49.154 slat (nsec): min=6568, max=63807, avg=23434.01, stdev=9894.91 00:37:49.154 clat (usec): min=5677, max=92113, avg=27810.06, stdev=4095.35 00:37:49.154 lat (usec): min=5695, max=92151, avg=27833.49, stdev=4095.83 00:37:49.154 clat percentiles (usec): 00:37:49.154 | 1.00th=[ 7898], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:37:49.154 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:37:49.154 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.154 | 99.00th=[29754], 99.50th=[40109], 99.90th=[90702], 99.95th=[90702], 00:37:49.154 | 99.99th=[91751] 00:37:49.155 bw ( KiB/s): min= 2176, max= 2565, per=4.24%, avg=2297.10, stdev=78.27, samples=20 00:37:49.155 iops : min= 544, max= 641, avg=574.15, stdev=19.52, samples=20 00:37:49.155 lat (msec) : 10=1.11%, 20=0.28%, 50=98.33%, 100=0.28% 00:37:49.155 cpu : usr=98.82%, sys=0.81%, ctx=18, majf=0, minf=56 00:37:49.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename0: (groupid=0, jobs=1): err= 0: pid=1387717: Mon 
Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10045msec) 00:37:49.155 slat (nsec): min=7670, max=94431, avg=45927.11, stdev=22604.61 00:37:49.155 clat (usec): min=19990, max=90962, avg=27920.05, stdev=3501.91 00:37:49.155 lat (usec): min=20014, max=91019, avg=27965.98, stdev=3501.63 00:37:49.155 clat percentiles (usec): 00:37:49.155 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:37:49.155 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:37:49.155 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.155 | 99.00th=[29754], 99.50th=[45876], 99.90th=[90702], 99.95th=[90702], 00:37:49.155 | 99.99th=[90702] 00:37:49.155 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.155 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.155 lat (msec) : 20=0.02%, 50=99.67%, 100=0.32% 00:37:49.155 cpu : usr=98.74%, sys=0.88%, ctx=18, majf=0, minf=45 00:37:49.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename0: (groupid=0, jobs=1): err= 0: pid=1387718: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10045msec) 00:37:49.155 slat (nsec): min=7353, max=92372, avg=38435.99, stdev=23312.82 00:37:49.155 clat (usec): min=22002, max=90623, avg=28011.72, stdev=3473.51 00:37:49.155 lat (usec): min=22010, max=90672, avg=28050.16, stdev=3473.22 00:37:49.155 clat percentiles (usec): 00:37:49.155 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:37:49.155 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:37:49.155 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.155 | 99.00th=[30016], 99.50th=[45351], 99.90th=[90702], 99.95th=[90702], 00:37:49.155 | 99.99th=[90702] 00:37:49.155 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=58.51, samples=20 00:37:49.155 iops : min= 544, max= 577, avg=566.55, stdev=14.63, samples=20 00:37:49.155 lat (msec) : 50=99.72%, 100=0.28% 00:37:49.155 cpu : usr=98.89%, sys=0.71%, ctx=12, majf=0, minf=38 00:37:49.155 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename0: (groupid=0, jobs=1): err= 0: pid=1387719: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.1MiB/10047msec) 00:37:49.155 slat (nsec): min=6767, max=83738, avg=29599.86, stdev=18977.42 00:37:49.155 clat (msec): min=13, max=101, avg=28.11, stdev= 4.35 00:37:49.155 lat (msec): min=13, max=101, avg=28.14, stdev= 4.35 00:37:49.155 clat percentiles (msec): 00:37:49.155 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.155 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.155 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.155 | 99.00th=[ 
30], 99.50th=[ 64], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.155 | 99.99th=[ 102] 00:37:49.155 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2258.20, stdev=86.12, samples=20 00:37:49.155 iops : min= 510, max= 576, avg=564.40, stdev=21.58, samples=20 00:37:49.155 lat (msec) : 20=0.04%, 50=99.40%, 100=0.28%, 250=0.28% 00:37:49.155 cpu : usr=98.83%, sys=0.79%, ctx=24, majf=0, minf=35 00:37:49.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename0: (groupid=0, jobs=1): err= 0: pid=1387720: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=564, BW=2258KiB/s (2312kB/s)(22.1MiB/10026msec) 00:37:49.155 slat (usec): min=6, max=103, avg=43.78, stdev=23.85 00:37:49.155 clat (usec): min=21291, max=91014, avg=27894.43, stdev=3853.29 00:37:49.155 lat (usec): min=21299, max=91057, avg=27938.21, stdev=3853.54 00:37:49.155 clat percentiles (usec): 00:37:49.155 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:37:49.155 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:37:49.155 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:37:49.155 | 99.00th=[30016], 99.50th=[57410], 99.90th=[90702], 99.95th=[90702], 00:37:49.155 | 99.99th=[90702] 00:37:49.155 bw ( KiB/s): min= 2016, max= 2304, per=4.16%, avg=2256.35, stdev=90.67, samples=20 00:37:49.155 iops : min= 504, max= 576, avg=563.90, stdev=22.74, samples=20 00:37:49.155 lat (msec) : 50=99.43%, 100=0.57% 00:37:49.155 cpu : usr=98.58%, sys=1.02%, ctx=17, majf=0, minf=38 00:37:49.155 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename1: (groupid=0, jobs=1): err= 0: pid=1387721: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10045msec) 00:37:49.155 slat (nsec): min=6811, max=89539, avg=17718.30, stdev=13306.54 00:37:49.155 clat (usec): min=25351, max=90482, avg=28152.73, stdev=3447.07 00:37:49.155 lat (usec): min=25368, max=90502, avg=28170.45, stdev=3447.40 00:37:49.155 clat percentiles (usec): 00:37:49.155 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:37:49.155 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:37:49.155 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:37:49.155 | 99.00th=[29754], 99.50th=[45351], 99.90th=[90702], 99.95th=[90702], 00:37:49.155 | 99.99th=[90702] 00:37:49.155 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.155 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.155 lat (msec) : 50=99.72%, 100=0.28% 00:37:49.155 cpu : usr=98.99%, sys=0.60%, ctx=41, majf=0, minf=62 00:37:49.155 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename1: (groupid=0, jobs=1): err= 0: pid=1387722: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=564, BW=2258KiB/s (2312kB/s)(22.1MiB/10044msec) 00:37:49.155 slat (nsec): min=6579, max=83848, avg=31307.61, stdev=18888.16 00:37:49.155 clat (msec): min=14, max=101, avg=28.04, stdev= 4.57 00:37:49.155 lat (msec): min=15, max=101, avg=28.08, stdev= 4.57 00:37:49.155 clat percentiles (msec): 00:37:49.155 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.155 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.155 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.155 | 99.00th=[ 41], 99.50th=[ 60], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.155 | 99.99th=[ 102] 00:37:49.155 bw ( KiB/s): min= 2043, max= 2352, per=4.17%, avg=2260.35, stdev=88.68, samples=20 00:37:49.155 iops : min= 510, max= 588, avg=564.90, stdev=22.26, samples=20 00:37:49.155 lat (msec) : 20=0.92%, 50=98.52%, 100=0.28%, 250=0.28% 00:37:49.155 cpu : usr=98.77%, sys=0.83%, ctx=11, majf=0, minf=60 00:37:49.155 IO depths : 1=5.1%, 2=11.3%, 4=24.8%, 8=51.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename1: (groupid=0, jobs=1): err= 0: pid=1387723: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.1MiB/10048msec) 00:37:49.155 slat (nsec): min=6950, max=88825, avg=28386.36, stdev=19407.43 00:37:49.155 clat (msec): min=15, max=101, avg=28.13, stdev= 4.40 00:37:49.155 lat (msec): min=15, max=101, avg=28.16, stdev= 4.40 00:37:49.155 clat percentiles (msec): 00:37:49.155 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.155 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.155 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.155 | 99.00th=[ 30], 99.50th=[ 65], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.155 | 99.99th=[ 102] 00:37:49.155 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2257.95, stdev=85.99, samples=20 00:37:49.155 iops : min= 510, max= 576, avg=564.30, stdev=21.52, samples=20 00:37:49.155 lat (msec) : 20=0.07%, 50=99.36%, 100=0.28%, 250=0.28% 00:37:49.155 cpu : usr=98.56%, sys=1.05%, ctx=18, majf=0, minf=39 00:37:49.155 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.155 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.155 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.155 filename1: (groupid=0, jobs=1): err= 0: pid=1387724: Mon Jul 15 12:27:37 2024 00:37:49.155 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.6MiB/10091msec) 00:37:49.155 slat (nsec): min=6715, max=83379, avg=31217.48, stdev=20153.98 00:37:49.155 clat (usec): min=4050, max=90518, avg=27642.68, stdev=4048.18 00:37:49.155 lat (usec): min=4060, max=90534, avg=27673.90, stdev=4049.28 00:37:49.155 clat percentiles (usec): 00:37:49.155 | 1.00th=[ 6128], 5.00th=[27395], 
10.00th=[27395], 20.00th=[27395], 00:37:49.155 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:37:49.155 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.155 | 99.00th=[29492], 99.50th=[40633], 99.90th=[90702], 99.95th=[90702], 00:37:49.155 | 99.99th=[90702] 00:37:49.155 bw ( KiB/s): min= 2176, max= 2688, per=4.25%, avg=2303.25, stdev=101.74, samples=20 00:37:49.155 iops : min= 544, max= 672, avg=575.70, stdev=25.44, samples=20 00:37:49.156 lat (msec) : 10=1.39%, 20=0.28%, 50=98.11%, 100=0.23% 00:37:49.156 cpu : usr=98.92%, sys=0.69%, ctx=8, majf=0, minf=42 00:37:49.156 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename1: (groupid=0, jobs=1): err= 0: pid=1387725: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.4MiB/10066msec) 00:37:49.156 slat (nsec): min=6888, max=94391, avg=21383.67, stdev=18165.32 00:37:49.156 clat (usec): min=5580, max=90149, avg=27947.69, stdev=3816.26 00:37:49.156 lat (usec): min=5592, max=90177, avg=27969.07, stdev=3817.27 00:37:49.156 clat percentiles (usec): 00:37:49.156 | 1.00th=[25297], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:37:49.156 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:37:49.156 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:37:49.156 | 99.00th=[29754], 99.50th=[41157], 99.90th=[89654], 99.95th=[89654], 00:37:49.156 | 99.99th=[89654] 00:37:49.156 bw ( KiB/s): min= 2176, max= 2436, per=4.21%, avg=2284.25, stdev=62.93, samples=20 00:37:49.156 iops : min= 544, max= 609, avg=570.95, stdev=15.71, samples=20 00:37:49.156 lat (msec) : 10=0.56%, 20=0.28%, 50=98.88%, 100=0.28% 00:37:49.156 cpu : usr=98.07%, sys=1.52%, ctx=32, majf=0, minf=49 00:37:49.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename1: (groupid=0, jobs=1): err= 0: pid=1387726: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10039msec) 00:37:49.156 slat (nsec): min=6092, max=83387, avg=30390.65, stdev=19100.89 00:37:49.156 clat (msec): min=26, max=101, avg=28.04, stdev= 4.15 00:37:49.156 lat (msec): min=26, max=101, avg=28.07, stdev= 4.15 00:37:49.156 clat percentiles (msec): 00:37:49.156 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.156 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.156 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.156 | 99.00th=[ 30], 99.50th=[ 55], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.156 | 99.99th=[ 102] 00:37:49.156 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2258.15, stdev=85.87, samples=20 00:37:49.156 iops : min= 510, max= 576, avg=564.35, stdev=21.55, samples=20 00:37:49.156 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:37:49.156 cpu : usr=98.88%, sys=0.72%, ctx=14, majf=0, minf=36 
00:37:49.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename1: (groupid=0, jobs=1): err= 0: pid=1387727: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10045msec) 00:37:49.156 slat (nsec): min=7619, max=99825, avg=45282.29, stdev=23018.86 00:37:49.156 clat (usec): min=19961, max=90967, avg=27902.70, stdev=3505.83 00:37:49.156 lat (usec): min=19983, max=91025, avg=27947.99, stdev=3506.12 00:37:49.156 clat percentiles (usec): 00:37:49.156 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:37:49.156 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:37:49.156 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.156 | 99.00th=[29754], 99.50th=[45876], 99.90th=[90702], 99.95th=[90702], 00:37:49.156 | 99.99th=[90702] 00:37:49.156 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.156 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.156 lat (msec) : 20=0.04%, 50=99.65%, 100=0.32% 00:37:49.156 cpu : usr=98.78%, sys=0.83%, ctx=15, majf=0, minf=32 00:37:49.156 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename1: (groupid=0, jobs=1): err= 0: pid=1387728: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=564, BW=2258KiB/s (2312kB/s)(22.2MiB/10061msec) 00:37:49.156 slat (usec): min=7, max=121, avg=31.69, stdev=22.71 00:37:49.156 clat (msec): min=21, max=106, avg=28.10, stdev= 3.62 00:37:49.156 lat (msec): min=21, max=106, avg=28.13, stdev= 3.62 00:37:49.156 clat percentiles (msec): 00:37:49.156 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.156 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.156 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.156 | 99.00th=[ 34], 99.50th=[ 46], 99.90th=[ 91], 99.95th=[ 91], 00:37:49.156 | 99.99th=[ 107] 00:37:49.156 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=55.19, samples=20 00:37:49.156 iops : min= 544, max= 577, avg=566.55, stdev=13.80, samples=20 00:37:49.156 lat (msec) : 50=99.72%, 100=0.25%, 250=0.04% 00:37:49.156 cpu : usr=98.80%, sys=0.82%, ctx=7, majf=0, minf=54 00:37:49.156 IO depths : 1=0.3%, 2=6.4%, 4=24.8%, 8=56.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename2: (groupid=0, jobs=1): err= 0: pid=1387729: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=571, BW=2288KiB/s (2342kB/s)(22.5MiB/10065msec) 00:37:49.156 slat (nsec): min=6872, max=83249, avg=29672.57, stdev=20494.89 
00:37:49.156 clat (usec): min=11976, max=90830, avg=27659.38, stdev=4269.96 00:37:49.156 lat (usec): min=11995, max=90872, avg=27689.05, stdev=4271.91 00:37:49.156 clat percentiles (usec): 00:37:49.156 | 1.00th=[16909], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:37:49.156 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:37:49.156 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.156 | 99.00th=[34866], 99.50th=[61604], 99.90th=[90702], 99.95th=[90702], 00:37:49.156 | 99.99th=[90702] 00:37:49.156 bw ( KiB/s): min= 2048, max= 2741, per=4.23%, avg=2296.65, stdev=135.91, samples=20 00:37:49.156 iops : min= 512, max= 685, avg=574.15, stdev=33.93, samples=20 00:37:49.156 lat (msec) : 20=2.36%, 50=97.08%, 100=0.56% 00:37:49.156 cpu : usr=98.71%, sys=0.89%, ctx=16, majf=0, minf=48 00:37:49.156 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename2: (groupid=0, jobs=1): err= 0: pid=1387730: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=563, BW=2255KiB/s (2309kB/s)(22.1MiB/10049msec) 00:37:49.156 slat (nsec): min=7299, max=68748, avg=25292.36, stdev=10504.07 00:37:49.156 clat (msec): min=13, max=104, avg=28.15, stdev= 4.37 00:37:49.156 lat (msec): min=13, max=104, avg=28.18, stdev= 4.37 00:37:49.156 clat percentiles (msec): 00:37:49.156 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.156 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.156 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.156 | 99.00th=[ 31], 99.50th=[ 62], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.156 | 99.99th=[ 105] 00:37:49.156 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2258.40, stdev=85.61, samples=20 00:37:49.156 iops : min= 510, max= 576, avg=564.45, stdev=21.45, samples=20 00:37:49.156 lat (msec) : 20=0.16%, 50=99.28%, 100=0.28%, 250=0.28% 00:37:49.156 cpu : usr=98.67%, sys=0.96%, ctx=33, majf=0, minf=48 00:37:49.156 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename2: (groupid=0, jobs=1): err= 0: pid=1387731: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.5MiB/10090msec) 00:37:49.156 slat (nsec): min=6921, max=83305, avg=31881.78, stdev=20072.05 00:37:49.156 clat (usec): min=5701, max=91990, avg=27706.57, stdev=4104.87 00:37:49.156 lat (usec): min=5712, max=92004, avg=27738.45, stdev=4105.87 00:37:49.156 clat percentiles (usec): 00:37:49.156 | 1.00th=[ 7832], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:37:49.156 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:37:49.156 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.156 | 99.00th=[30016], 99.50th=[40109], 99.90th=[90702], 99.95th=[90702], 00:37:49.156 | 99.99th=[91751] 00:37:49.156 bw ( KiB/s): min= 2176, max= 2560, per=4.23%, 
avg=2296.85, stdev=77.37, samples=20 00:37:49.156 iops : min= 544, max= 640, avg=574.10, stdev=19.34, samples=20 00:37:49.156 lat (msec) : 10=1.11%, 20=0.28%, 50=98.33%, 100=0.28% 00:37:49.156 cpu : usr=98.78%, sys=0.82%, ctx=13, majf=0, minf=39 00:37:49.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.156 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.156 filename2: (groupid=0, jobs=1): err= 0: pid=1387732: Mon Jul 15 12:27:37 2024 00:37:49.156 read: IOPS=563, BW=2255KiB/s (2310kB/s)(22.1MiB/10045msec) 00:37:49.156 slat (nsec): min=6311, max=86087, avg=31226.11, stdev=18829.45 00:37:49.156 clat (msec): min=26, max=105, avg=28.04, stdev= 4.21 00:37:49.156 lat (msec): min=26, max=105, avg=28.08, stdev= 4.21 00:37:49.156 clat percentiles (msec): 00:37:49.156 | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.156 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 00:37:49.156 | 70.00th=[ 28], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.156 | 99.00th=[ 30], 99.50th=[ 56], 99.90th=[ 102], 99.95th=[ 102], 00:37:49.156 | 99.99th=[ 106] 00:37:49.156 bw ( KiB/s): min= 2043, max= 2304, per=4.16%, avg=2257.95, stdev=86.38, samples=20 00:37:49.156 iops : min= 510, max= 576, avg=564.30, stdev=21.68, samples=20 00:37:49.156 lat (msec) : 50=99.44%, 100=0.28%, 250=0.28% 00:37:49.156 cpu : usr=98.82%, sys=0.79%, ctx=11, majf=0, minf=41 00:37:49.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.157 filename2: (groupid=0, jobs=1): err= 0: pid=1387733: Mon Jul 15 12:27:37 2024 00:37:49.157 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10046msec) 00:37:49.157 slat (nsec): min=3417, max=97069, avg=46746.32, stdev=22610.92 00:37:49.157 clat (usec): min=25003, max=90924, avg=27862.09, stdev=3490.75 00:37:49.157 lat (usec): min=25010, max=90984, avg=27908.83, stdev=3491.03 00:37:49.157 clat percentiles (usec): 00:37:49.157 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:37:49.157 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:37:49.157 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:37:49.157 | 99.00th=[29492], 99.50th=[46400], 99.90th=[90702], 99.95th=[90702], 00:37:49.157 | 99.99th=[90702] 00:37:49.157 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.157 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.157 lat (msec) : 50=99.72%, 100=0.28% 00:37:49.157 cpu : usr=98.76%, sys=0.85%, ctx=17, majf=0, minf=54 00:37:49.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.157 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:49.157 filename2: (groupid=0, jobs=1): err= 0: pid=1387734: Mon Jul 15 12:27:37 2024 00:37:49.157 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.2MiB/10045msec) 00:37:49.157 slat (nsec): min=7402, max=96502, avg=46334.14, stdev=22600.27 00:37:49.157 clat (usec): min=24992, max=90927, avg=27891.71, stdev=3486.46 00:37:49.157 lat (usec): min=25007, max=90962, avg=27938.05, stdev=3486.55 00:37:49.157 clat percentiles (usec): 00:37:49.157 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:37:49.157 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:37:49.157 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:37:49.157 | 99.00th=[29492], 99.50th=[45876], 99.90th=[90702], 99.95th=[90702], 00:37:49.157 | 99.99th=[90702] 00:37:49.157 bw ( KiB/s): min= 2176, max= 2308, per=4.18%, avg=2266.20, stdev=60.15, samples=20 00:37:49.157 iops : min= 544, max= 577, avg=566.55, stdev=15.04, samples=20 00:37:49.157 lat (msec) : 50=99.72%, 100=0.28% 00:37:49.157 cpu : usr=98.57%, sys=1.05%, ctx=23, majf=0, minf=40 00:37:49.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:49.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.157 filename2: (groupid=0, jobs=1): err= 0: pid=1387735: Mon Jul 15 12:27:37 2024 00:37:49.157 read: IOPS=579, BW=2317KiB/s (2373kB/s)(22.7MiB/10020msec) 00:37:49.157 slat (nsec): min=6832, max=94769, avg=34858.41, stdev=25146.08 00:37:49.157 clat (usec): min=12011, max=91051, avg=27286.97, stdev=4499.30 00:37:49.157 lat (usec): min=12033, max=91126, avg=27321.83, stdev=4502.94 00:37:49.157 clat percentiles (usec): 00:37:49.157 | 1.00th=[17433], 5.00th=[20841], 10.00th=[24773], 20.00th=[27395], 00:37:49.157 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:37:49.157 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28443], 00:37:49.157 | 99.00th=[37487], 99.50th=[52167], 99.90th=[90702], 99.95th=[90702], 00:37:49.157 | 99.99th=[90702] 00:37:49.157 bw ( KiB/s): min= 2048, max= 2832, per=4.27%, avg=2314.20, stdev=164.41, samples=20 00:37:49.157 iops : min= 512, max= 708, avg=578.40, stdev=41.12, samples=20 00:37:49.157 lat (msec) : 20=4.10%, 50=95.35%, 100=0.55% 00:37:49.157 cpu : usr=98.74%, sys=0.88%, ctx=20, majf=0, minf=50 00:37:49.157 IO depths : 1=4.2%, 2=8.7%, 4=18.8%, 8=58.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:49.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 complete : 0=0.0%, 4=92.7%, 8=2.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 issued rwts: total=5804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.157 filename2: (groupid=0, jobs=1): err= 0: pid=1387736: Mon Jul 15 12:27:37 2024 00:37:49.157 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.1MiB/10040msec) 00:37:49.157 slat (nsec): min=6280, max=89762, avg=14756.52, stdev=12761.16 00:37:49.157 clat (msec): min=19, max=105, avg=28.34, stdev= 4.52 00:37:49.157 lat (msec): min=19, max=105, avg=28.36, stdev= 4.52 00:37:49.157 clat percentiles (msec): 00:37:49.157 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], 00:37:49.157 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 28], 60.00th=[ 28], 
00:37:49.157 | 70.00th=[ 28], 80.00th=[ 29], 90.00th=[ 29], 95.00th=[ 29], 00:37:49.157 | 99.00th=[ 32], 99.50th=[ 74], 99.90th=[ 100], 99.95th=[ 106], 00:37:49.157 | 99.99th=[ 106] 00:37:49.157 bw ( KiB/s): min= 2032, max= 2304, per=4.15%, avg=2252.60, stdev=75.84, samples=20 00:37:49.157 iops : min= 508, max= 576, avg=563.00, stdev=18.89, samples=20 00:37:49.157 lat (msec) : 20=0.04%, 50=99.40%, 100=0.50%, 250=0.07% 00:37:49.157 cpu : usr=98.83%, sys=0.77%, ctx=12, majf=0, minf=47 00:37:49.157 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=79.3%, 16=18.2%, 32=0.0%, >=64=0.0% 00:37:49.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 complete : 0=0.0%, 4=89.8%, 8=9.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:49.157 issued rwts: total=5650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:49.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:49.157 00:37:49.157 Run status group 0 (all jobs): 00:37:49.157 READ: bw=52.9MiB/s (55.5MB/s), 2251KiB/s-2317KiB/s (2305kB/s-2373kB/s), io=534MiB (560MB), run=10002-10091msec 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 bdev_null0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:49.157 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.158 [2024-07-15 12:27:37.753932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.158 bdev_null1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:49.158 { 00:37:49.158 "params": { 00:37:49.158 "name": "Nvme$subsystem", 00:37:49.158 "trtype": "$TEST_TRANSPORT", 00:37:49.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.158 "adrfam": "ipv4", 00:37:49.158 "trsvcid": "$NVMF_PORT", 00:37:49.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.158 "hdgst": ${hdgst:-false}, 00:37:49.158 "ddgst": ${ddgst:-false} 00:37:49.158 }, 00:37:49.158 "method": "bdev_nvme_attach_controller" 00:37:49.158 } 00:37:49.158 EOF 00:37:49.158 )") 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:49.158 { 00:37:49.158 "params": { 00:37:49.158 "name": "Nvme$subsystem", 00:37:49.158 "trtype": "$TEST_TRANSPORT", 00:37:49.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:49.158 "adrfam": "ipv4", 00:37:49.158 "trsvcid": "$NVMF_PORT", 00:37:49.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:49.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:49.158 "hdgst": ${hdgst:-false}, 00:37:49.158 "ddgst": ${ddgst:-false} 00:37:49.158 }, 00:37:49.158 "method": "bdev_nvme_attach_controller" 00:37:49.158 } 00:37:49.158 EOF 00:37:49.158 )") 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:49.158 "params": { 00:37:49.158 "name": "Nvme0", 00:37:49.158 "trtype": "tcp", 00:37:49.158 "traddr": "10.0.0.2", 00:37:49.158 "adrfam": "ipv4", 00:37:49.158 "trsvcid": "4420", 00:37:49.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:49.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:49.158 "hdgst": false, 00:37:49.158 "ddgst": false 00:37:49.158 }, 00:37:49.158 "method": "bdev_nvme_attach_controller" 00:37:49.158 },{ 00:37:49.158 "params": { 00:37:49.158 "name": "Nvme1", 00:37:49.158 "trtype": "tcp", 00:37:49.158 "traddr": "10.0.0.2", 00:37:49.158 "adrfam": "ipv4", 00:37:49.158 "trsvcid": "4420", 00:37:49.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:49.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:49.158 "hdgst": false, 00:37:49.158 "ddgst": false 00:37:49.158 }, 00:37:49.158 "method": "bdev_nvme_attach_controller" 00:37:49.158 }' 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:49.158 12:27:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:49.158 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:49.158 ... 00:37:49.158 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:49.158 ... 
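The trace above assembles one bdev_nvme_attach_controller entry per target subsystem and hands the result to the fio spdk_bdev plugin over a file descriptor (--spdk_json_conf /dev/fd/62, generated job file on /dev/fd/61). A condensed stand-alone sketch of that invocation follows; the outer subsystems/bdev/config wrapper, the /tmp/bdev.json path and the job-file name are illustrative assumptions, and only the inner params blocks appear verbatim in the log.

# Host side: describe the two NVMe-oF/TCP controllers for the spdk_bdev ioengine,
# then run fio with the plugin preloaded, as in the LD_PRELOAD line above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json dif.fio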
00:37:49.158 fio-3.35 00:37:49.158 Starting 4 threads 00:37:49.158 EAL: No free 2048 kB hugepages reported on node 1 00:37:54.429 00:37:54.429 filename0: (groupid=0, jobs=1): err= 0: pid=1389677: Mon Jul 15 12:27:43 2024 00:37:54.429 read: IOPS=2809, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:37:54.429 slat (usec): min=6, max=220, avg= 9.44, stdev= 3.94 00:37:54.429 clat (usec): min=748, max=5407, avg=2818.35, stdev=481.91 00:37:54.429 lat (usec): min=759, max=5420, avg=2827.79, stdev=482.06 00:37:54.429 clat percentiles (usec): 00:37:54.429 | 1.00th=[ 1811], 5.00th=[ 2089], 10.00th=[ 2278], 20.00th=[ 2474], 00:37:54.429 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2933], 00:37:54.429 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3621], 00:37:54.429 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 5145], 00:37:54.429 | 99.99th=[ 5407] 00:37:54.429 bw ( KiB/s): min=21536, max=24432, per=26.80%, avg=22446.22, stdev=925.29, samples=9 00:37:54.429 iops : min= 2692, max= 3054, avg=2805.78, stdev=115.66, samples=9 00:37:54.429 lat (usec) : 750=0.01%, 1000=0.04% 00:37:54.429 lat (msec) : 2=2.88%, 4=94.19%, 10=2.89% 00:37:54.429 cpu : usr=95.92%, sys=3.72%, ctx=9, majf=0, minf=9 00:37:54.429 IO depths : 1=0.2%, 2=8.6%, 4=62.8%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 issued rwts: total=14053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:54.429 filename0: (groupid=0, jobs=1): err= 0: pid=1389678: Mon Jul 15 12:27:43 2024 00:37:54.429 read: IOPS=2536, BW=19.8MiB/s (20.8MB/s)(99.1MiB/5002msec) 00:37:54.429 slat (usec): min=6, max=118, avg= 9.62, stdev= 3.61 00:37:54.429 clat (usec): min=771, max=6384, avg=3124.97, stdev=583.59 00:37:54.429 lat (usec): min=783, max=6398, avg=3134.58, stdev=583.46 00:37:54.429 clat percentiles (usec): 00:37:54.429 | 1.00th=[ 1975], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:37:54.429 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3064], 00:37:54.429 | 70.00th=[ 3195], 80.00th=[ 3425], 90.00th=[ 3949], 95.00th=[ 4424], 00:37:54.429 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5473], 99.95th=[ 5604], 00:37:54.429 | 99.99th=[ 5669] 00:37:54.429 bw ( KiB/s): min=19456, max=21136, per=24.20%, avg=20270.22, stdev=546.90, samples=9 00:37:54.429 iops : min= 2432, max= 2642, avg=2533.78, stdev=68.36, samples=9 00:37:54.429 lat (usec) : 1000=0.02% 00:37:54.429 lat (msec) : 2=1.05%, 4=89.39%, 10=9.54% 00:37:54.429 cpu : usr=96.30%, sys=3.34%, ctx=13, majf=0, minf=9 00:37:54.429 IO depths : 1=0.2%, 2=6.0%, 4=65.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 issued rwts: total=12688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:54.429 filename1: (groupid=0, jobs=1): err= 0: pid=1389679: Mon Jul 15 12:27:43 2024 00:37:54.429 read: IOPS=2538, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5003msec) 00:37:54.429 slat (usec): min=6, max=213, avg= 9.53, stdev= 3.96 00:37:54.429 clat (usec): min=936, max=5627, avg=3123.28, stdev=616.10 00:37:54.429 lat (usec): min=948, max=5633, avg=3132.81, stdev=615.95 00:37:54.429 clat percentiles (usec): 
00:37:54.429 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:37:54.429 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3064], 00:37:54.429 | 70.00th=[ 3163], 80.00th=[ 3392], 90.00th=[ 4015], 95.00th=[ 4490], 00:37:54.429 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5538], 99.95th=[ 5604], 00:37:54.429 | 99.99th=[ 5604] 00:37:54.429 bw ( KiB/s): min=19184, max=21744, per=24.27%, avg=20327.11, stdev=841.11, samples=9 00:37:54.429 iops : min= 2398, max= 2718, avg=2540.89, stdev=105.14, samples=9 00:37:54.429 lat (usec) : 1000=0.01% 00:37:54.429 lat (msec) : 2=1.10%, 4=88.48%, 10=10.41% 00:37:54.429 cpu : usr=96.32%, sys=3.32%, ctx=13, majf=0, minf=9 00:37:54.429 IO depths : 1=0.6%, 2=4.6%, 4=67.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 issued rwts: total=12698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:54.429 filename1: (groupid=0, jobs=1): err= 0: pid=1389680: Mon Jul 15 12:27:43 2024 00:37:54.429 read: IOPS=2587, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:37:54.429 slat (usec): min=6, max=114, avg= 9.51, stdev= 3.65 00:37:54.429 clat (usec): min=769, max=6839, avg=3064.19, stdev=549.24 00:37:54.429 lat (usec): min=776, max=6864, avg=3073.70, stdev=549.24 00:37:54.429 clat percentiles (usec): 00:37:54.429 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:37:54.429 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:37:54.429 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3752], 95.00th=[ 4228], 00:37:54.429 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 5538], 00:37:54.429 | 99.99th=[ 6783] 00:37:54.429 bw ( KiB/s): min=19856, max=21248, per=24.71%, avg=20694.33, stdev=411.24, samples=9 00:37:54.429 iops : min= 2482, max= 2656, avg=2586.78, stdev=51.40, samples=9 00:37:54.429 lat (usec) : 1000=0.04% 00:37:54.429 lat (msec) : 2=1.06%, 4=91.96%, 10=6.94% 00:37:54.429 cpu : usr=96.66%, sys=3.02%, ctx=8, majf=0, minf=9 00:37:54.429 IO depths : 1=0.2%, 2=5.1%, 4=65.9%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.429 issued rwts: total=12939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:54.429 00:37:54.429 Run status group 0 (all jobs): 00:37:54.429 READ: bw=81.8MiB/s (85.8MB/s), 19.8MiB/s-21.9MiB/s (20.8MB/s-23.0MB/s), io=409MiB (429MB), run=5001-5003msec 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.429 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 00:37:54.430 real 0m24.319s 00:37:54.430 user 4m52.874s 00:37:54.430 sys 0m4.458s 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 ************************************ 00:37:54.430 END TEST fio_dif_rand_params 00:37:54.430 ************************************ 00:37:54.430 12:27:44 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:37:54.430 12:27:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:54.430 12:27:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:54.430 12:27:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 ************************************ 00:37:54.430 START TEST fio_dif_digest 00:37:54.430 ************************************ 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 bdev_null0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:54.430 [2024-07-15 12:27:44.189334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:37:54.430 { 00:37:54.430 "params": { 00:37:54.430 "name": "Nvme$subsystem", 00:37:54.430 "trtype": "$TEST_TRANSPORT", 00:37:54.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.430 "adrfam": "ipv4", 00:37:54.430 "trsvcid": "$NVMF_PORT", 00:37:54.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.430 "hdgst": ${hdgst:-false}, 00:37:54.430 "ddgst": ${ddgst:-false} 00:37:54.430 }, 00:37:54.430 "method": "bdev_nvme_attach_controller" 00:37:54.430 } 00:37:54.430 EOF 00:37:54.430 )") 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:54.430 12:27:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:54.431 "params": { 00:37:54.431 "name": "Nvme0", 00:37:54.431 "trtype": "tcp", 00:37:54.431 "traddr": "10.0.0.2", 00:37:54.431 "adrfam": "ipv4", 00:37:54.431 "trsvcid": "4420", 00:37:54.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.431 "hdgst": true, 00:37:54.431 "ddgst": true 00:37:54.431 }, 00:37:54.431 "method": "bdev_nvme_attach_controller" 00:37:54.431 }' 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:54.431 12:27:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.690 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:54.690 ... 
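For the digest run above the host-side JSON sets "hdgst": true and "ddgst": true, so every NVMe/TCP PDU carries header and data digests. The matching target side was set up earlier in the create_subsystems trace; condensed here as plain rpc.py calls, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the running nvmf_tgt.

# Target side: null bdev with 16-byte metadata and DIF type 3 (sizes taken
# verbatim from the trace), exported as cnode0 over NVMe/TCP on 10.0.0.2:4420.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420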
00:37:54.690 fio-3.35 00:37:54.690 Starting 3 threads 00:37:54.690 EAL: No free 2048 kB hugepages reported on node 1 00:38:06.895 00:38:06.895 filename0: (groupid=0, jobs=1): err= 0: pid=1390745: Mon Jul 15 12:27:55 2024 00:38:06.895 read: IOPS=275, BW=34.5MiB/s (36.1MB/s)(346MiB/10046msec) 00:38:06.895 slat (nsec): min=6489, max=51909, avg=11994.08, stdev=2234.00 00:38:06.895 clat (usec): min=7935, max=50838, avg=10849.75, stdev=1301.42 00:38:06.895 lat (usec): min=7948, max=50849, avg=10861.75, stdev=1301.38 00:38:06.895 clat percentiles (usec): 00:38:06.895 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:38:06.895 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:38:06.895 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:38:06.895 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13698], 99.95th=[48497], 00:38:06.895 | 99.99th=[50594] 00:38:06.895 bw ( KiB/s): min=33792, max=36608, per=33.72%, avg=35430.40, stdev=823.90, samples=20 00:38:06.895 iops : min= 264, max= 286, avg=276.80, stdev= 6.44, samples=20 00:38:06.895 lat (msec) : 10=13.86%, 20=86.06%, 50=0.04%, 100=0.04% 00:38:06.895 cpu : usr=94.58%, sys=5.11%, ctx=29, majf=0, minf=161 00:38:06.895 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.895 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.895 filename0: (groupid=0, jobs=1): err= 0: pid=1390746: Mon Jul 15 12:27:55 2024 00:38:06.895 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(336MiB/10045msec) 00:38:06.895 slat (nsec): min=6490, max=24505, avg=11818.16, stdev=1993.89 00:38:06.895 clat (usec): min=8504, max=48743, avg=11197.17, stdev=1265.32 00:38:06.895 lat (usec): min=8517, max=48754, avg=11208.99, stdev=1265.34 00:38:06.895 clat percentiles (usec): 00:38:06.895 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:38:06.895 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:38:06.895 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:38:06.895 | 99.00th=[13042], 99.50th=[13566], 99.90th=[14222], 99.95th=[45351], 00:38:06.895 | 99.99th=[48497] 00:38:06.895 bw ( KiB/s): min=33280, max=35840, per=32.67%, avg=34329.60, stdev=765.30, samples=20 00:38:06.895 iops : min= 260, max= 280, avg=268.20, stdev= 5.98, samples=20 00:38:06.895 lat (msec) : 10=7.49%, 20=92.44%, 50=0.07% 00:38:06.895 cpu : usr=94.52%, sys=5.16%, ctx=31, majf=0, minf=111 00:38:06.895 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 issued rwts: total=2684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.895 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.895 filename0: (groupid=0, jobs=1): err= 0: pid=1390747: Mon Jul 15 12:27:55 2024 00:38:06.895 read: IOPS=278, BW=34.8MiB/s (36.4MB/s)(349MiB/10043msec) 00:38:06.895 slat (nsec): min=6473, max=26548, avg=11792.50, stdev=2044.11 00:38:06.895 clat (usec): min=8123, max=46157, avg=10762.32, stdev=1279.62 00:38:06.895 lat (usec): min=8136, max=46170, avg=10774.11, stdev=1279.58 00:38:06.895 clat percentiles (usec): 00:38:06.895 | 
1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:38:06.895 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:38:06.895 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:38:06.895 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13960], 99.95th=[45351], 00:38:06.895 | 99.99th=[46400] 00:38:06.895 bw ( KiB/s): min=33280, max=37888, per=33.99%, avg=35712.00, stdev=1378.60, samples=20 00:38:06.895 iops : min= 260, max= 296, avg=279.00, stdev=10.77, samples=20 00:38:06.895 lat (msec) : 10=19.84%, 20=80.09%, 50=0.07% 00:38:06.895 cpu : usr=94.94%, sys=4.75%, ctx=26, majf=0, minf=176 00:38:06.895 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.895 issued rwts: total=2792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.895 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:06.895 00:38:06.895 Run status group 0 (all jobs): 00:38:06.895 READ: bw=103MiB/s (108MB/s), 33.4MiB/s-34.8MiB/s (35.0MB/s-36.4MB/s), io=1031MiB (1081MB), run=10043-10046msec 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:06.895 00:38:06.895 real 0m11.245s 00:38:06.895 user 0m35.217s 00:38:06.895 sys 0m1.842s 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:06.895 12:27:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:06.895 ************************************ 00:38:06.895 END TEST fio_dif_digest 00:38:06.895 ************************************ 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:06.895 12:27:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:06.895 12:27:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:38:06.895 rmmod nvme_tcp 00:38:06.895 rmmod nvme_fabrics 00:38:06.895 rmmod nvme_keyring 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1381845 ']' 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1381845 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1381845 ']' 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1381845 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1381845 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1381845' 00:38:06.895 killing process with pid 1381845 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1381845 00:38:06.895 12:27:55 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1381845 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:06.895 12:27:55 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:08.801 Waiting for block devices as requested 00:38:08.801 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:08.801 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:08.801 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:08.801 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:08.801 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:08.801 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:09.074 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:09.074 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:09.074 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:09.389 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:09.389 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:09.389 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:09.389 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:09.648 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:09.648 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:09.648 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:09.648 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:09.906 12:27:59 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:09.906 12:27:59 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:09.906 12:27:59 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:09.906 12:27:59 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:09.906 12:27:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.906 12:27:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:09.906 12:27:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:11.811 12:28:01 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:11.811 00:38:11.811 real 1m13.790s 00:38:11.811 user 7m10.739s 00:38:11.811 sys 0m19.216s 00:38:11.811 12:28:01 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:38:11.811 12:28:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:11.811 ************************************ 00:38:11.811 END TEST nvmf_dif 00:38:11.811 ************************************ 00:38:12.069 12:28:01 -- common/autotest_common.sh@1142 -- # return 0 00:38:12.069 12:28:01 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:12.069 12:28:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:12.069 12:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:12.069 12:28:01 -- common/autotest_common.sh@10 -- # set +x 00:38:12.069 ************************************ 00:38:12.069 START TEST nvmf_abort_qd_sizes 00:38:12.069 ************************************ 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:12.069 * Looking for test storage... 00:38:12.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:12.069 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.070 12:28:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:12.070 12:28:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:18.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:18.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:18.635 Found net devices under 0000:86:00.0: cvl_0_0 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:18.635 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:18.636 Found net devices under 0000:86:00.1: cvl_0_1 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:18.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:18.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:38:18.636 00:38:18.636 --- 10.0.0.2 ping statistics --- 00:38:18.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.636 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:18.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:18.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:38:18.636 00:38:18.636 --- 10.0.0.1 ping statistics --- 00:38:18.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.636 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:18.636 12:28:07 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:20.540 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:20.540 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:21.473 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1398521 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1398521 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1398521 ']' 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
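The two single-packet pings above are the sanity check for the loopback topology built from the two ports of the NIC at 0000:86:00.x: cvl_0_0 is moved into a private network namespace and carries the target address, while cvl_0_1 stays in the root namespace for the initiator. The sequence, condensed from the trace with interface names and addresses taken verbatim:

ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator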
00:38:21.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:21.473 12:28:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:21.731 [2024-07-15 12:28:11.488032] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:38:21.731 [2024-07-15 12:28:11.488077] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:21.731 EAL: No free 2048 kB hugepages reported on node 1 00:38:21.731 [2024-07-15 12:28:11.560194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:21.731 [2024-07-15 12:28:11.603314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:21.731 [2024-07-15 12:28:11.603356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:21.731 [2024-07-15 12:28:11.603363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:21.731 [2024-07-15 12:28:11.603368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:21.731 [2024-07-15 12:28:11.603374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:21.731 [2024-07-15 12:28:11.603433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.731 [2024-07-15 12:28:11.603541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:21.731 [2024-07-15 12:28:11.603654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:38:21.731 [2024-07-15 12:28:11.603652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.296 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:22.296 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:38:22.296 12:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:22.296 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:22.296 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:22.552 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:38:22.553 12:28:12 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:22.553 12:28:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:22.553 ************************************ 00:38:22.553 START TEST spdk_target_abort 00:38:22.553 ************************************ 00:38:22.553 12:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:38:22.553 12:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:22.553 12:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:38:22.553 12:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:22.553 12:28:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.840 spdk_targetn1 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.840 [2024-07-15 12:28:15.204443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:25.840 [2024-07-15 12:28:15.237329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:25.840 12:28:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:25.840 EAL: No free 2048 kB hugepages 
reported on node 1 00:38:29.118 Initializing NVMe Controllers 00:38:29.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:29.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:29.118 Initialization complete. Launching workers. 00:38:29.118 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15386, failed: 0 00:38:29.118 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1441, failed to submit 13945 00:38:29.118 success 717, unsuccess 724, failed 0 00:38:29.118 12:28:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.118 12:28:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.118 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.399 Initializing NVMe Controllers 00:38:32.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:32.399 Initialization complete. Launching workers. 00:38:32.399 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8542, failed: 0 00:38:32.399 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7283 00:38:32.399 success 310, unsuccess 949, failed 0 00:38:32.399 12:28:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:32.399 12:28:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:32.399 EAL: No free 2048 kB hugepages reported on node 1 00:38:35.717 Initializing NVMe Controllers 00:38:35.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:35.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:35.717 Initialization complete. Launching workers. 
00:38:35.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38289, failed: 0 00:38:35.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2765, failed to submit 35524 00:38:35.717 success 590, unsuccess 2175, failed 0 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.717 12:28:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1398521 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1398521 ']' 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1398521 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:36.285 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1398521 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1398521' 00:38:36.544 killing process with pid 1398521 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1398521 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1398521 00:38:36.544 00:38:36.544 real 0m14.119s 00:38:36.544 user 0m56.393s 00:38:36.544 sys 0m2.264s 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:36.544 12:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.544 ************************************ 00:38:36.544 END TEST spdk_target_abort 00:38:36.544 ************************************ 00:38:36.544 12:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:36.544 12:28:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:36.544 12:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:36.544 12:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:36.544 12:28:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:36.803 
************************************ 00:38:36.803 START TEST kernel_target_abort 00:38:36.803 ************************************ 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:36.803 12:28:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:39.332 Waiting for block devices as requested 00:38:39.332 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:39.590 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:39.590 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:39.590 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:39.590 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:39.848 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:39.848 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:39.848 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:40.107 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:40.107 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:40.107 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:40.107 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:40.365 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:40.365 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:40.365 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:40.623 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:40.623 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:40.623 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:40.894 No valid GPT data, bailing 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:40.894 12:28:30 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:38:40.894 00:38:40.894 Discovery Log Number of Records 2, Generation counter 2 00:38:40.894 =====Discovery Log Entry 0====== 00:38:40.894 trtype: tcp 00:38:40.894 adrfam: ipv4 00:38:40.894 subtype: current discovery subsystem 00:38:40.894 treq: not specified, sq flow control disable supported 00:38:40.894 portid: 1 00:38:40.894 trsvcid: 4420 00:38:40.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:40.894 traddr: 10.0.0.1 00:38:40.894 eflags: none 00:38:40.894 sectype: none 00:38:40.894 =====Discovery Log Entry 1====== 00:38:40.894 trtype: tcp 00:38:40.894 adrfam: ipv4 00:38:40.894 subtype: nvme subsystem 00:38:40.894 treq: not specified, sq flow control disable supported 00:38:40.894 portid: 1 00:38:40.894 trsvcid: 4420 00:38:40.894 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:40.894 traddr: 10.0.0.1 00:38:40.894 eflags: none 00:38:40.894 sectype: none 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:40.894 12:28:30 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:40.894 12:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:40.894 EAL: No free 2048 kB hugepages reported on node 1 00:38:44.178 Initializing NVMe Controllers 00:38:44.178 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:44.178 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:44.178 Initialization complete. Launching workers. 00:38:44.178 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86473, failed: 0 00:38:44.178 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 86473, failed to submit 0 00:38:44.178 success 0, unsuccess 86473, failed 0 00:38:44.178 12:28:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:44.178 12:28:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:44.178 EAL: No free 2048 kB hugepages reported on node 1 00:38:47.643 Initializing NVMe Controllers 00:38:47.643 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:47.643 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:47.643 Initialization complete. Launching workers. 
00:38:47.643 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139454, failed: 0 00:38:47.643 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35050, failed to submit 104404 00:38:47.643 success 0, unsuccess 35050, failed 0 00:38:47.643 12:28:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:47.644 12:28:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:47.644 EAL: No free 2048 kB hugepages reported on node 1 00:38:50.174 Initializing NVMe Controllers 00:38:50.174 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:50.174 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:50.174 Initialization complete. Launching workers. 00:38:50.174 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133787, failed: 0 00:38:50.174 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33478, failed to submit 100309 00:38:50.174 success 0, unsuccess 33478, failed 0 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:50.174 12:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:53.461 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:38:53.461 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:53.461 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:54.027 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:54.027 00:38:54.027 real 0m17.327s 00:38:54.027 user 0m8.628s 00:38:54.027 sys 0m4.995s 00:38:54.027 12:28:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:54.027 12:28:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:54.027 ************************************ 00:38:54.027 END TEST kernel_target_abort 00:38:54.027 ************************************ 00:38:54.027 12:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:38:54.027 12:28:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:54.027 12:28:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:54.027 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:54.028 rmmod nvme_tcp 00:38:54.028 rmmod nvme_fabrics 00:38:54.028 rmmod nvme_keyring 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1398521 ']' 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1398521 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1398521 ']' 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1398521 00:38:54.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1398521) - No such process 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1398521 is not found' 00:38:54.028 Process with pid 1398521 is not found 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:54.028 12:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:56.583 Waiting for block devices as requested 00:38:56.842 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:56.842 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:56.842 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:57.101 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:57.101 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:57.101 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:57.359 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:57.359 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:57.359 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:57.618 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:57.618 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:57.618 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:57.618 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:57.877 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:38:57.877 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:57.877 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:58.136 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:58.136 12:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:00.062 12:28:50 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:00.062 00:39:00.062 real 0m48.202s 00:39:00.062 user 1m9.290s 00:39:00.062 sys 0m15.707s 00:39:00.062 12:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:00.062 12:28:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:00.062 ************************************ 00:39:00.062 END TEST nvmf_abort_qd_sizes 00:39:00.062 ************************************ 00:39:00.319 12:28:50 -- common/autotest_common.sh@1142 -- # return 0 00:39:00.319 12:28:50 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:00.319 12:28:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:00.319 12:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:00.319 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:39:00.319 ************************************ 00:39:00.319 START TEST keyring_file 00:39:00.319 ************************************ 00:39:00.319 12:28:50 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:00.319 * Looking for test storage... 
00:39:00.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:00.319 12:28:50 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:00.319 12:28:50 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.319 12:28:50 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.319 12:28:50 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.319 12:28:50 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.319 12:28:50 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.319 12:28:50 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.319 12:28:50 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.319 12:28:50 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.319 12:28:50 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:00.320 12:28:50 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4wHCibwWYR 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:00.320 12:28:50 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4wHCibwWYR 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4wHCibwWYR 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.4wHCibwWYR 00:39:00.320 12:28:50 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QidEFZwG3W 00:39:00.320 12:28:50 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:00.320 12:28:50 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:00.578 12:28:50 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QidEFZwG3W 00:39:00.578 12:28:50 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QidEFZwG3W 00:39:00.578 12:28:50 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QidEFZwG3W 00:39:00.578 12:28:50 keyring_file -- keyring/file.sh@30 -- # tgtpid=1407289 00:39:00.578 12:28:50 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:00.578 12:28:50 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1407289 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1407289 ']' 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:00.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:00.578 12:28:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.578 [2024-07-15 12:28:50.392922] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:39:00.578 [2024-07-15 12:28:50.392972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407289 ] 00:39:00.578 EAL: No free 2048 kB hugepages reported on node 1 00:39:00.578 [2024-07-15 12:28:50.443717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.578 [2024-07-15 12:28:50.484540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:00.836 12:28:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.836 [2024-07-15 12:28:50.683206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.836 null0 00:39:00.836 [2024-07-15 12:28:50.715255] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:00.836 [2024-07-15 12:28:50.715565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:00.836 [2024-07-15 12:28:50.723272] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:00.836 12:28:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.836 [2024-07-15 12:28:50.735297] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:00.836 request: 00:39:00.836 { 00:39:00.836 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:00.836 "secure_channel": false, 00:39:00.836 "listen_address": { 00:39:00.836 "trtype": "tcp", 00:39:00.836 "traddr": "127.0.0.1", 00:39:00.836 "trsvcid": "4420" 00:39:00.836 }, 00:39:00.836 "method": "nvmf_subsystem_add_listener", 00:39:00.836 "req_id": 1 00:39:00.836 } 00:39:00.836 Got JSON-RPC error response 00:39:00.836 response: 00:39:00.836 { 00:39:00.836 "code": -32602, 00:39:00.836 "message": "Invalid parameters" 00:39:00.836 } 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:00.836 12:28:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=1407293 00:39:00.836 12:28:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1407293 /var/tmp/bperf.sock 00:39:00.836 12:28:50 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1407293 ']' 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:00.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:00.836 12:28:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:00.836 [2024-07-15 12:28:50.786836] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 00:39:00.836 [2024-07-15 12:28:50.786875] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407293 ] 00:39:00.836 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.094 [2024-07-15 12:28:50.851735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.094 [2024-07-15 12:28:50.892256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.094 12:28:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:01.095 12:28:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:01.095 12:28:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:01.095 12:28:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:01.353 12:28:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QidEFZwG3W 00:39:01.353 12:28:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QidEFZwG3W 00:39:01.353 12:28:51 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:01.353 12:28:51 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:01.353 12:28:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.353 12:28:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:01.353 12:28:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.610 12:28:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.4wHCibwWYR == \/\t\m\p\/\t\m\p\.\4\w\H\C\i\b\w\W\Y\R ]] 00:39:01.610 12:28:51 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:39:01.610 12:28:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:01.610 12:28:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.610 12:28:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:01.610 12:28:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.868 12:28:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QidEFZwG3W == \/\t\m\p\/\t\m\p\.\Q\i\d\E\F\Z\w\G\3\W ]] 00:39:01.869 12:28:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.869 12:28:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:01.869 12:28:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.869 12:28:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.127 12:28:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:02.127 12:28:52 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:02.127 12:28:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:02.385 [2024-07-15 12:28:52.210865] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:02.385 nvme0n1 00:39:02.385 12:28:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:02.385 12:28:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:02.385 12:28:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.385 12:28:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.385 12:28:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:02.385 12:28:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.644 12:28:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:02.644 12:28:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:02.644 12:28:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:02.644 12:28:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:02.644 12:28:52 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:02.644 12:28:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:02.644 12:28:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:02.911 12:28:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:39:02.911 12:28:52 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:02.911 Running I/O for 1 seconds... 00:39:03.844 00:39:03.844 Latency(us) 00:39:03.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.844 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:03.844 nvme0n1 : 1.00 14534.81 56.78 0.00 0.00 8785.13 3632.97 88445.11 00:39:03.844 =================================================================================================================== 00:39:03.844 Total : 14534.81 56.78 0.00 0.00 8785.13 3632.97 88445.11 00:39:03.844 0 00:39:03.844 12:28:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:03.844 12:28:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:04.102 12:28:53 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:39:04.102 12:28:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.102 12:28:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.102 12:28:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.102 12:28:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.102 12:28:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.362 12:28:54 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:39:04.362 12:28:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:04.362 12:28:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:04.362 12:28:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:04.362 12:28:54 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:04.362 12:28:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:04.362 12:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:04.621 [2024-07-15 12:28:54.487589] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:04.621 [2024-07-15 12:28:54.487847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c2cd0 (107): Transport endpoint is not connected 00:39:04.621 [2024-07-15 12:28:54.488841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c2cd0 (9): Bad file descriptor 00:39:04.621 [2024-07-15 12:28:54.489842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:04.621 [2024-07-15 12:28:54.489854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:04.621 [2024-07-15 12:28:54.489861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:04.621 request: 00:39:04.621 { 00:39:04.621 "name": "nvme0", 00:39:04.621 "trtype": "tcp", 00:39:04.621 "traddr": "127.0.0.1", 00:39:04.621 "adrfam": "ipv4", 00:39:04.621 "trsvcid": "4420", 00:39:04.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:04.621 "prchk_reftag": false, 00:39:04.621 "prchk_guard": false, 00:39:04.621 "hdgst": false, 00:39:04.621 "ddgst": false, 00:39:04.621 "psk": "key1", 00:39:04.621 "method": "bdev_nvme_attach_controller", 00:39:04.621 "req_id": 1 00:39:04.621 } 00:39:04.621 Got JSON-RPC error response 00:39:04.621 response: 00:39:04.621 { 00:39:04.621 "code": -5, 00:39:04.621 "message": "Input/output error" 00:39:04.621 } 00:39:04.621 12:28:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:04.621 12:28:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:04.621 12:28:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:04.621 12:28:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:04.621 12:28:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:39:04.621 12:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:04.621 12:28:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.621 12:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.621 12:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:04.621 12:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.880 12:28:54 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:39:04.880 12:28:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:39:04.880 12:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:04.880 12:28:54 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:04.880 12:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:04.880 12:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:04.880 12:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:05.138 12:28:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:05.138 12:28:54 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:39:05.138 12:28:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:05.138 12:28:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:39:05.138 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:05.396 12:28:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:39:05.396 12:28:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:39:05.396 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.654 12:28:55 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:39:05.654 12:28:55 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.4wHCibwWYR 00:39:05.654 12:28:55 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:05.654 12:28:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.654 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.655 [2024-07-15 12:28:55.580022] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4wHCibwWYR': 0100660 00:39:05.655 [2024-07-15 12:28:55.580045] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:05.655 request: 00:39:05.655 { 00:39:05.655 "name": "key0", 00:39:05.655 "path": "/tmp/tmp.4wHCibwWYR", 00:39:05.655 "method": "keyring_file_add_key", 00:39:05.655 "req_id": 1 00:39:05.655 } 00:39:05.655 Got JSON-RPC error response 00:39:05.655 response: 00:39:05.655 { 00:39:05.655 "code": -1, 00:39:05.655 "message": "Operation not permitted" 00:39:05.655 } 00:39:05.655 12:28:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:05.655 12:28:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:05.655 12:28:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:05.655 12:28:55 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:05.655 12:28:55 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.4wHCibwWYR 00:39:05.655 12:28:55 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.655 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4wHCibwWYR 00:39:05.912 12:28:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.4wHCibwWYR 00:39:05.912 12:28:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:39:05.912 12:28:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:05.912 12:28:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:05.912 12:28:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:05.912 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:05.912 12:28:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.169 12:28:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:39:06.169 12:28:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:06.169 12:28:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.169 12:28:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.169 [2024-07-15 12:28:56.105429] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.4wHCibwWYR': No such file or directory 00:39:06.169 [2024-07-15 12:28:56.105449] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:06.169 [2024-07-15 12:28:56.105469] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:06.169 [2024-07-15 12:28:56.105475] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:06.169 [2024-07-15 12:28:56.105480] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:06.169 request: 00:39:06.169 { 00:39:06.169 "name": "nvme0", 00:39:06.169 "trtype": "tcp", 00:39:06.169 "traddr": "127.0.0.1", 00:39:06.169 "adrfam": "ipv4", 00:39:06.169 
"trsvcid": "4420", 00:39:06.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:06.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:06.169 "prchk_reftag": false, 00:39:06.169 "prchk_guard": false, 00:39:06.169 "hdgst": false, 00:39:06.169 "ddgst": false, 00:39:06.169 "psk": "key0", 00:39:06.169 "method": "bdev_nvme_attach_controller", 00:39:06.169 "req_id": 1 00:39:06.169 } 00:39:06.169 Got JSON-RPC error response 00:39:06.169 response: 00:39:06.169 { 00:39:06.169 "code": -19, 00:39:06.169 "message": "No such device" 00:39:06.169 } 00:39:06.169 12:28:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:39:06.169 12:28:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:06.169 12:28:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:06.169 12:28:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:06.169 12:28:56 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:39:06.169 12:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:06.427 12:28:56 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dSeEn8IFnx 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:06.427 12:28:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dSeEn8IFnx 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dSeEn8IFnx 00:39:06.427 12:28:56 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.dSeEn8IFnx 00:39:06.427 12:28:56 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dSeEn8IFnx 00:39:06.427 12:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dSeEn8IFnx 00:39:06.685 12:28:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.685 12:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:06.944 nvme0n1 00:39:06.944 
12:28:56 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:06.944 12:28:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:06.944 12:28:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:06.944 12:28:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:06.944 12:28:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:06.944 12:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.202 12:28:56 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:07.202 12:28:56 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:07.202 12:28:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:07.202 12:28:57 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:07.202 12:28:57 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:07.202 12:28:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.202 12:28:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.202 12:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.461 12:28:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:07.461 12:28:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:07.461 12:28:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:07.461 12:28:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:07.461 12:28:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:07.461 12:28:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:07.461 12:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.719 12:28:57 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:07.719 12:28:57 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:07.719 12:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:07.719 12:28:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:07.719 12:28:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:07.719 12:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:07.978 12:28:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:07.978 12:28:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dSeEn8IFnx 00:39:07.978 12:28:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dSeEn8IFnx 00:39:08.235 12:28:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QidEFZwG3W 00:39:08.235 12:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QidEFZwG3W 00:39:08.235 12:28:58 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.235 12:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:08.494 nvme0n1 00:39:08.494 12:28:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:08.494 12:28:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:08.752 12:28:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:08.752 "subsystems": [ 00:39:08.752 { 00:39:08.752 "subsystem": "keyring", 00:39:08.752 "config": [ 00:39:08.752 { 00:39:08.752 "method": "keyring_file_add_key", 00:39:08.752 "params": { 00:39:08.752 "name": "key0", 00:39:08.752 "path": "/tmp/tmp.dSeEn8IFnx" 00:39:08.752 } 00:39:08.752 }, 00:39:08.752 { 00:39:08.752 "method": "keyring_file_add_key", 00:39:08.752 "params": { 00:39:08.752 "name": "key1", 00:39:08.752 "path": "/tmp/tmp.QidEFZwG3W" 00:39:08.752 } 00:39:08.752 } 00:39:08.752 ] 00:39:08.752 }, 00:39:08.752 { 00:39:08.752 "subsystem": "iobuf", 00:39:08.752 "config": [ 00:39:08.752 { 00:39:08.752 "method": "iobuf_set_options", 00:39:08.752 "params": { 00:39:08.752 "small_pool_count": 8192, 00:39:08.752 "large_pool_count": 1024, 00:39:08.752 "small_bufsize": 8192, 00:39:08.752 "large_bufsize": 135168 00:39:08.752 } 00:39:08.752 } 00:39:08.752 ] 00:39:08.752 }, 00:39:08.752 { 00:39:08.752 "subsystem": "sock", 00:39:08.752 "config": [ 00:39:08.752 { 00:39:08.752 "method": "sock_set_default_impl", 00:39:08.752 "params": { 00:39:08.752 "impl_name": "posix" 00:39:08.752 } 00:39:08.752 }, 00:39:08.752 { 00:39:08.752 "method": "sock_impl_set_options", 00:39:08.752 "params": { 00:39:08.752 "impl_name": "ssl", 00:39:08.752 "recv_buf_size": 4096, 00:39:08.752 "send_buf_size": 4096, 00:39:08.752 "enable_recv_pipe": true, 00:39:08.752 "enable_quickack": false, 00:39:08.752 "enable_placement_id": 0, 00:39:08.752 "enable_zerocopy_send_server": true, 00:39:08.752 "enable_zerocopy_send_client": false, 00:39:08.752 "zerocopy_threshold": 0, 00:39:08.752 "tls_version": 0, 00:39:08.752 "enable_ktls": false 00:39:08.752 } 00:39:08.752 }, 00:39:08.752 { 00:39:08.753 "method": "sock_impl_set_options", 00:39:08.753 "params": { 00:39:08.753 "impl_name": "posix", 00:39:08.753 "recv_buf_size": 2097152, 00:39:08.753 "send_buf_size": 2097152, 00:39:08.753 "enable_recv_pipe": true, 00:39:08.753 "enable_quickack": false, 00:39:08.753 "enable_placement_id": 0, 00:39:08.753 "enable_zerocopy_send_server": true, 00:39:08.753 "enable_zerocopy_send_client": false, 00:39:08.753 "zerocopy_threshold": 0, 00:39:08.753 "tls_version": 0, 00:39:08.753 "enable_ktls": false 00:39:08.753 } 00:39:08.753 } 00:39:08.753 ] 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "subsystem": "vmd", 00:39:08.753 "config": [] 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "subsystem": "accel", 00:39:08.753 "config": [ 00:39:08.753 { 00:39:08.753 "method": "accel_set_options", 00:39:08.753 "params": { 00:39:08.753 "small_cache_size": 128, 00:39:08.753 "large_cache_size": 16, 00:39:08.753 "task_count": 2048, 00:39:08.753 "sequence_count": 2048, 00:39:08.753 "buf_count": 2048 00:39:08.753 } 00:39:08.753 } 00:39:08.753 ] 00:39:08.753 
}, 00:39:08.753 { 00:39:08.753 "subsystem": "bdev", 00:39:08.753 "config": [ 00:39:08.753 { 00:39:08.753 "method": "bdev_set_options", 00:39:08.753 "params": { 00:39:08.753 "bdev_io_pool_size": 65535, 00:39:08.753 "bdev_io_cache_size": 256, 00:39:08.753 "bdev_auto_examine": true, 00:39:08.753 "iobuf_small_cache_size": 128, 00:39:08.753 "iobuf_large_cache_size": 16 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_raid_set_options", 00:39:08.753 "params": { 00:39:08.753 "process_window_size_kb": 1024 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_iscsi_set_options", 00:39:08.753 "params": { 00:39:08.753 "timeout_sec": 30 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_nvme_set_options", 00:39:08.753 "params": { 00:39:08.753 "action_on_timeout": "none", 00:39:08.753 "timeout_us": 0, 00:39:08.753 "timeout_admin_us": 0, 00:39:08.753 "keep_alive_timeout_ms": 10000, 00:39:08.753 "arbitration_burst": 0, 00:39:08.753 "low_priority_weight": 0, 00:39:08.753 "medium_priority_weight": 0, 00:39:08.753 "high_priority_weight": 0, 00:39:08.753 "nvme_adminq_poll_period_us": 10000, 00:39:08.753 "nvme_ioq_poll_period_us": 0, 00:39:08.753 "io_queue_requests": 512, 00:39:08.753 "delay_cmd_submit": true, 00:39:08.753 "transport_retry_count": 4, 00:39:08.753 "bdev_retry_count": 3, 00:39:08.753 "transport_ack_timeout": 0, 00:39:08.753 "ctrlr_loss_timeout_sec": 0, 00:39:08.753 "reconnect_delay_sec": 0, 00:39:08.753 "fast_io_fail_timeout_sec": 0, 00:39:08.753 "disable_auto_failback": false, 00:39:08.753 "generate_uuids": false, 00:39:08.753 "transport_tos": 0, 00:39:08.753 "nvme_error_stat": false, 00:39:08.753 "rdma_srq_size": 0, 00:39:08.753 "io_path_stat": false, 00:39:08.753 "allow_accel_sequence": false, 00:39:08.753 "rdma_max_cq_size": 0, 00:39:08.753 "rdma_cm_event_timeout_ms": 0, 00:39:08.753 "dhchap_digests": [ 00:39:08.753 "sha256", 00:39:08.753 "sha384", 00:39:08.753 "sha512" 00:39:08.753 ], 00:39:08.753 "dhchap_dhgroups": [ 00:39:08.753 "null", 00:39:08.753 "ffdhe2048", 00:39:08.753 "ffdhe3072", 00:39:08.753 "ffdhe4096", 00:39:08.753 "ffdhe6144", 00:39:08.753 "ffdhe8192" 00:39:08.753 ] 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_nvme_attach_controller", 00:39:08.753 "params": { 00:39:08.753 "name": "nvme0", 00:39:08.753 "trtype": "TCP", 00:39:08.753 "adrfam": "IPv4", 00:39:08.753 "traddr": "127.0.0.1", 00:39:08.753 "trsvcid": "4420", 00:39:08.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:08.753 "prchk_reftag": false, 00:39:08.753 "prchk_guard": false, 00:39:08.753 "ctrlr_loss_timeout_sec": 0, 00:39:08.753 "reconnect_delay_sec": 0, 00:39:08.753 "fast_io_fail_timeout_sec": 0, 00:39:08.753 "psk": "key0", 00:39:08.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:08.753 "hdgst": false, 00:39:08.753 "ddgst": false 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_nvme_set_hotplug", 00:39:08.753 "params": { 00:39:08.753 "period_us": 100000, 00:39:08.753 "enable": false 00:39:08.753 } 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "method": "bdev_wait_for_examine" 00:39:08.753 } 00:39:08.753 ] 00:39:08.753 }, 00:39:08.753 { 00:39:08.753 "subsystem": "nbd", 00:39:08.753 "config": [] 00:39:08.753 } 00:39:08.753 ] 00:39:08.753 }' 00:39:08.753 12:28:58 keyring_file -- keyring/file.sh@114 -- # killprocess 1407293 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1407293 ']' 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1407293 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407293 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407293' 00:39:08.753 killing process with pid 1407293 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@967 -- # kill 1407293 00:39:08.753 Received shutdown signal, test time was about 1.000000 seconds 00:39:08.753 00:39:08.753 Latency(us) 00:39:08.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.753 =================================================================================================================== 00:39:08.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:08.753 12:28:58 keyring_file -- common/autotest_common.sh@972 -- # wait 1407293 00:39:09.012 12:28:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=1408784 00:39:09.012 12:28:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1408784 /var/tmp/bperf.sock 00:39:09.012 12:28:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1408784 ']' 00:39:09.013 12:28:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:09.013 12:28:58 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:09.013 12:28:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:09.013 12:28:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
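Every bperf_cmd in this trace expands to scripts/rpc.py pointed at the bdevperf application's Unix-domain RPC socket (/var/tmp/bperf.sock). The sketch below illustrates roughly what that wrapper does on the wire for a call such as keyring_get_keys: open the socket, send a JSON-RPC 2.0 request, and read until a complete JSON reply arrives. It is a simplified illustration (single request, no framing edge cases), not the rpc.py implementation itself.

# Sketch: what "rpc.py -s /var/tmp/bperf.sock keyring_get_keys" amounts to on the wire.
# Assumes bdevperf is already listening on the socket used throughout this test run.
import json
import socket

def bperf_rpc(method, params=None, sock_path="/var/tmp/bperf.sock"):
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                return None
            buf += chunk
            try:
                return json.loads(buf)   # stop once a full JSON document has been received
            except json.JSONDecodeError:
                continue                 # partial response, keep reading

# The shell tests then filter the result with jq '.[] | select(.name == "key0")' and read .refcnt,
# which is what the repeated get_refcnt checks above are doing.
print(bperf_rpc("keyring_get_keys"))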
00:39:09.013 12:28:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:09.013 "subsystems": [ 00:39:09.013 { 00:39:09.013 "subsystem": "keyring", 00:39:09.013 "config": [ 00:39:09.013 { 00:39:09.013 "method": "keyring_file_add_key", 00:39:09.013 "params": { 00:39:09.013 "name": "key0", 00:39:09.013 "path": "/tmp/tmp.dSeEn8IFnx" 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "keyring_file_add_key", 00:39:09.013 "params": { 00:39:09.013 "name": "key1", 00:39:09.013 "path": "/tmp/tmp.QidEFZwG3W" 00:39:09.013 } 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "iobuf", 00:39:09.013 "config": [ 00:39:09.013 { 00:39:09.013 "method": "iobuf_set_options", 00:39:09.013 "params": { 00:39:09.013 "small_pool_count": 8192, 00:39:09.013 "large_pool_count": 1024, 00:39:09.013 "small_bufsize": 8192, 00:39:09.013 "large_bufsize": 135168 00:39:09.013 } 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "sock", 00:39:09.013 "config": [ 00:39:09.013 { 00:39:09.013 "method": "sock_set_default_impl", 00:39:09.013 "params": { 00:39:09.013 "impl_name": "posix" 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "sock_impl_set_options", 00:39:09.013 "params": { 00:39:09.013 "impl_name": "ssl", 00:39:09.013 "recv_buf_size": 4096, 00:39:09.013 "send_buf_size": 4096, 00:39:09.013 "enable_recv_pipe": true, 00:39:09.013 "enable_quickack": false, 00:39:09.013 "enable_placement_id": 0, 00:39:09.013 "enable_zerocopy_send_server": true, 00:39:09.013 "enable_zerocopy_send_client": false, 00:39:09.013 "zerocopy_threshold": 0, 00:39:09.013 "tls_version": 0, 00:39:09.013 "enable_ktls": false 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "sock_impl_set_options", 00:39:09.013 "params": { 00:39:09.013 "impl_name": "posix", 00:39:09.013 "recv_buf_size": 2097152, 00:39:09.013 "send_buf_size": 2097152, 00:39:09.013 "enable_recv_pipe": true, 00:39:09.013 "enable_quickack": false, 00:39:09.013 "enable_placement_id": 0, 00:39:09.013 "enable_zerocopy_send_server": true, 00:39:09.013 "enable_zerocopy_send_client": false, 00:39:09.013 "zerocopy_threshold": 0, 00:39:09.013 "tls_version": 0, 00:39:09.013 "enable_ktls": false 00:39:09.013 } 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "vmd", 00:39:09.013 "config": [] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "accel", 00:39:09.013 "config": [ 00:39:09.013 { 00:39:09.013 "method": "accel_set_options", 00:39:09.013 "params": { 00:39:09.013 "small_cache_size": 128, 00:39:09.013 "large_cache_size": 16, 00:39:09.013 "task_count": 2048, 00:39:09.013 "sequence_count": 2048, 00:39:09.013 "buf_count": 2048 00:39:09.013 } 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "bdev", 00:39:09.013 "config": [ 00:39:09.013 { 00:39:09.013 "method": "bdev_set_options", 00:39:09.013 "params": { 00:39:09.013 "bdev_io_pool_size": 65535, 00:39:09.013 "bdev_io_cache_size": 256, 00:39:09.013 "bdev_auto_examine": true, 00:39:09.013 "iobuf_small_cache_size": 128, 00:39:09.013 "iobuf_large_cache_size": 16 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "bdev_raid_set_options", 00:39:09.013 "params": { 00:39:09.013 "process_window_size_kb": 1024 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "bdev_iscsi_set_options", 00:39:09.013 "params": { 00:39:09.013 "timeout_sec": 30 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": 
"bdev_nvme_set_options", 00:39:09.013 "params": { 00:39:09.013 "action_on_timeout": "none", 00:39:09.013 "timeout_us": 0, 00:39:09.013 "timeout_admin_us": 0, 00:39:09.013 "keep_alive_timeout_ms": 10000, 00:39:09.013 "arbitration_burst": 0, 00:39:09.013 "low_priority_weight": 0, 00:39:09.013 "medium_priority_weight": 0, 00:39:09.013 "high_priority_weight": 0, 00:39:09.013 "nvme_adminq_poll_period_us": 10000, 00:39:09.013 "nvme_ioq_poll_period_us": 0, 00:39:09.013 "io_queue_requests": 512, 00:39:09.013 "delay_cmd_submit": true, 00:39:09.013 "transport_retry_count": 4, 00:39:09.013 "bdev_retry_count": 3, 00:39:09.013 "transport_ack_timeout": 0, 00:39:09.013 "ctrlr_loss_timeout_sec": 0, 00:39:09.013 "reconnect_delay_sec": 0, 00:39:09.013 "fast_io_fail_timeout_sec": 0, 00:39:09.013 "disable_auto_failback": false, 00:39:09.013 "generate_uuids": false, 00:39:09.013 "transport_tos": 0, 00:39:09.013 "nvme_error_stat": false, 00:39:09.013 "rdma_srq_size": 0, 00:39:09.013 "io_path_stat": false, 00:39:09.013 "allow_accel_sequence": false, 00:39:09.013 "rdma_max_cq_size": 0, 00:39:09.013 "rdma_cm_event_timeout_ms": 0, 00:39:09.013 "dhchap_digests": [ 00:39:09.013 "sha256", 00:39:09.013 "sha384", 00:39:09.013 "sha512" 00:39:09.013 ], 00:39:09.013 "dhchap_dhgroups": [ 00:39:09.013 "null", 00:39:09.013 "ffdhe2048", 00:39:09.013 "ffdhe3072", 00:39:09.013 "ffdhe4096", 00:39:09.013 "ffdhe6144", 00:39:09.013 "ffdhe8192" 00:39:09.013 ] 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "bdev_nvme_attach_controller", 00:39:09.013 "params": { 00:39:09.013 "name": "nvme0", 00:39:09.013 "trtype": "TCP", 00:39:09.013 "adrfam": "IPv4", 00:39:09.013 "traddr": "127.0.0.1", 00:39:09.013 "trsvcid": "4420", 00:39:09.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.013 "prchk_reftag": false, 00:39:09.013 "prchk_guard": false, 00:39:09.013 "ctrlr_loss_timeout_sec": 0, 00:39:09.013 "reconnect_delay_sec": 0, 00:39:09.013 "fast_io_fail_timeout_sec": 0, 00:39:09.013 "psk": "key0", 00:39:09.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.013 "hdgst": false, 00:39:09.013 "ddgst": false 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "bdev_nvme_set_hotplug", 00:39:09.013 "params": { 00:39:09.013 "period_us": 100000, 00:39:09.013 "enable": false 00:39:09.013 } 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "method": "bdev_wait_for_examine" 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }, 00:39:09.013 { 00:39:09.013 "subsystem": "nbd", 00:39:09.013 "config": [] 00:39:09.013 } 00:39:09.013 ] 00:39:09.013 }' 00:39:09.013 12:28:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:09.013 12:28:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:09.013 [2024-07-15 12:28:58.960888] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:39:09.013 [2024-07-15 12:28:58.960936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408784 ] 00:39:09.013 EAL: No free 2048 kB hugepages reported on node 1 00:39:09.273 [2024-07-15 12:28:59.029264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:09.273 [2024-07-15 12:28:59.070215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:09.273 [2024-07-15 12:28:59.224647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:09.840 12:28:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:09.840 12:28:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:39:09.840 12:28:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:09.840 12:28:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:09.840 12:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.098 12:28:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:10.098 12:28:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:10.098 12:28:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:10.098 12:28:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.098 12:28:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.098 12:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.098 12:28:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:10.357 12:29:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:10.357 12:29:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:10.357 12:29:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:10.357 12:29:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:10.357 12:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:10.357 12:29:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:10.616 12:29:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:10.616 12:29:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:10.616 12:29:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dSeEn8IFnx /tmp/tmp.QidEFZwG3W 00:39:10.616 12:29:00 keyring_file -- keyring/file.sh@20 -- # killprocess 1408784 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1408784 ']' 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1408784 00:39:10.616 12:29:00 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408784 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408784' 00:39:10.616 killing process with pid 1408784 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@967 -- # kill 1408784 00:39:10.616 Received shutdown signal, test time was about 1.000000 seconds 00:39:10.616 00:39:10.616 Latency(us) 00:39:10.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:10.616 =================================================================================================================== 00:39:10.616 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:10.616 12:29:00 keyring_file -- common/autotest_common.sh@972 -- # wait 1408784 00:39:10.875 12:29:00 keyring_file -- keyring/file.sh@21 -- # killprocess 1407289 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1407289 ']' 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1407289 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407289 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407289' 00:39:10.875 killing process with pid 1407289 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@967 -- # kill 1407289 00:39:10.875 [2024-07-15 12:29:00.784413] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:39:10.875 12:29:00 keyring_file -- common/autotest_common.sh@972 -- # wait 1407289 00:39:11.134 00:39:11.134 real 0m10.958s 00:39:11.134 user 0m26.973s 00:39:11.134 sys 0m2.662s 00:39:11.134 12:29:01 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:11.134 12:29:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:11.134 ************************************ 00:39:11.134 END TEST keyring_file 00:39:11.134 ************************************ 00:39:11.134 12:29:01 -- common/autotest_common.sh@1142 -- # return 0 00:39:11.134 12:29:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:39:11.134 12:29:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.134 12:29:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:11.134 12:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:11.134 12:29:01 -- common/autotest_common.sh@10 -- # set +x 00:39:11.393 ************************************ 00:39:11.393 START TEST keyring_linux 00:39:11.393 ************************************ 00:39:11.393 12:29:01 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:11.393 * Looking for test storage... 00:39:11.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.393 12:29:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.393 12:29:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.393 12:29:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.393 12:29:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.393 12:29:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.393 12:29:01 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.393 12:29:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:11.393 12:29:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:11.393 12:29:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:11.393 12:29:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:11.393 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:11.394 12:29:01 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:11.394 /tmp/:spdk-test:key0 00:39:11.394 12:29:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:39:11.394 12:29:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:11.394 12:29:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:11.394 /tmp/:spdk-test:key1 00:39:11.394 12:29:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1409133 00:39:11.394 12:29:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1409133 00:39:11.394 12:29:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1409133 ']' 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:11.394 12:29:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:11.653 [2024-07-15 12:29:01.393416] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
00:39:11.653 [2024-07-15 12:29:01.393466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409133 ] 00:39:11.653 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.653 [2024-07-15 12:29:01.458096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.653 [2024-07-15 12:29:01.499236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:11.912 [2024-07-15 12:29:01.702812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.912 null0 00:39:11.912 [2024-07-15 12:29:01.734853] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:11.912 [2024-07-15 12:29:01.735187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:11.912 2893536 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:11.912 899526658 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1409288 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1409288 /var/tmp/bperf.sock 00:39:11.912 12:29:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1409288 ']' 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:11.912 12:29:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:11.913 12:29:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:11.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:11.913 12:29:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:11.913 12:29:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:11.913 [2024-07-15 12:29:01.803403] Starting SPDK v24.09-pre git sha1 2728651ee / DPDK 23.11.0 initialization... 
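In the keyring_linux run above, the two PSKs are loaded into the kernel session keyring with keyctl add user ... @s, returning the serial numbers 2893536 and 899526658; the later check_keys steps resolve those serials with keyctl search and compare the stored payload with keyctl print. Below is a small illustrative sketch of that verification, shelling out to the same keyctl binary the script uses; the key names and the @s session-keyring target mirror the trace, while the helper itself is only an assumption-level example.

# Sketch: verify an :spdk-test:key* entry the way linux.sh's get_keysn / keyctl print checks do.
import subprocess

def check_session_key(name, expected_psk):
    # keyctl search @s user <name>  -> serial number of the key in the session keyring
    sn = subprocess.run(["keyctl", "search", "@s", "user", name],
                        check=True, capture_output=True, text=True).stdout.strip()
    # keyctl print <sn>             -> the NVMeTLSkey-1:... payload stored at add time
    payload = subprocess.run(["keyctl", "print", sn],
                             check=True, capture_output=True, text=True).stdout.strip()
    return sn, payload == expected_psk

# Example (values taken from this run): serial 2893536 was returned for :spdk-test:key0.
# check_session_key(":spdk-test:key0", "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:")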
00:39:11.913 [2024-07-15 12:29:01.803455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409288 ] 00:39:11.913 EAL: No free 2048 kB hugepages reported on node 1 00:39:11.913 [2024-07-15 12:29:01.871230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.171 [2024-07-15 12:29:01.912477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:12.171 12:29:01 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:12.171 12:29:01 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:39:12.171 12:29:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:12.171 12:29:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:12.172 12:29:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:12.172 12:29:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:12.431 12:29:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:12.431 12:29:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:12.690 [2024-07-15 12:29:02.510407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:12.690 nvme0n1 00:39:12.690 12:29:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:12.690 12:29:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:12.690 12:29:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:12.690 12:29:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:12.690 12:29:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:12.690 12:29:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:12.949 12:29:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:12.949 12:29:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:12.949 12:29:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:12.949 12:29:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:12.949 12:29:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:12.949 12:29:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:12.949 12:29:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@25 -- # sn=2893536 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 2893536 == \2\8\9\3\5\3\6 ]] 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 2893536 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:13.208 12:29:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:13.208 Running I/O for 1 seconds... 00:39:14.192 00:39:14.192 Latency(us) 00:39:14.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:14.192 nvme0n1 : 1.01 18458.46 72.10 0.00 0.00 6905.88 5157.40 12309.37 00:39:14.192 =================================================================================================================== 00:39:14.192 Total : 18458.46 72.10 0.00 0.00 6905.88 5157.40 12309.37 00:39:14.192 0 00:39:14.192 12:29:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:14.192 12:29:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:14.460 12:29:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:14.460 12:29:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:14.460 12:29:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:14.460 12:29:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:14.460 12:29:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:14.460 12:29:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:14.719 12:29:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:14.719 12:29:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:14.719 12:29:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:14.719 12:29:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:14.719 12:29:04 keyring_linux -- keyring/common.sh@8 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:14.719 [2024-07-15 12:29:04.634630] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:14.719 [2024-07-15 12:29:04.635509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173dbf0 (107): Transport endpoint is not connected 00:39:14.719 [2024-07-15 12:29:04.636503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173dbf0 (9): Bad file descriptor 00:39:14.719 [2024-07-15 12:29:04.637503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:14.719 [2024-07-15 12:29:04.637515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:14.719 [2024-07-15 12:29:04.637523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:39:14.719 request: 00:39:14.719 { 00:39:14.719 "name": "nvme0", 00:39:14.719 "trtype": "tcp", 00:39:14.719 "traddr": "127.0.0.1", 00:39:14.719 "adrfam": "ipv4", 00:39:14.719 "trsvcid": "4420", 00:39:14.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:14.719 "prchk_reftag": false, 00:39:14.719 "prchk_guard": false, 00:39:14.719 "hdgst": false, 00:39:14.719 "ddgst": false, 00:39:14.719 "psk": ":spdk-test:key1", 00:39:14.719 "method": "bdev_nvme_attach_controller", 00:39:14.719 "req_id": 1 00:39:14.719 } 00:39:14.719 Got JSON-RPC error response 00:39:14.719 response: 00:39:14.719 { 00:39:14.719 "code": -5, 00:39:14.719 "message": "Input/output error" 00:39:14.719 } 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:14.719 12:29:04 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@33 -- # sn=2893536 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 2893536 00:39:14.720 1 links removed 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@33 -- # sn=899526658 00:39:14.720 12:29:04 keyring_linux -- 
keyring/linux.sh@34 -- # keyctl unlink 899526658 00:39:14.720 1 links removed 00:39:14.720 12:29:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1409288 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1409288 ']' 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1409288 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409288 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:39:14.720 12:29:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409288' 00:39:14.720 killing process with pid 1409288 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 1409288 00:39:14.979 Received shutdown signal, test time was about 1.000000 seconds 00:39:14.979 00:39:14.979 Latency(us) 00:39:14.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.979 =================================================================================================================== 00:39:14.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 1409288 00:39:14.979 12:29:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1409133 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1409133 ']' 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1409133 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1409133 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1409133' 00:39:14.979 killing process with pid 1409133 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@967 -- # kill 1409133 00:39:14.979 12:29:04 keyring_linux -- common/autotest_common.sh@972 -- # wait 1409133 00:39:15.238 00:39:15.238 real 0m4.080s 00:39:15.238 user 0m7.273s 00:39:15.238 sys 0m1.477s 00:39:15.238 12:29:05 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:15.238 12:29:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:15.238 ************************************ 00:39:15.238 END TEST keyring_linux 00:39:15.238 ************************************ 00:39:15.497 12:29:05 -- common/autotest_common.sh@1142 -- # return 0 00:39:15.497 12:29:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 
']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:15.497 12:29:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:39:15.497 12:29:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:15.497 12:29:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:15.497 12:29:05 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:39:15.497 12:29:05 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:39:15.497 12:29:05 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:39:15.497 12:29:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:15.497 12:29:05 -- common/autotest_common.sh@10 -- # set +x 00:39:15.497 12:29:05 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:39:15.497 12:29:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:15.497 12:29:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:15.497 12:29:05 -- common/autotest_common.sh@10 -- # set +x 00:39:20.765 INFO: APP EXITING 00:39:20.765 INFO: killing all VMs 00:39:20.765 INFO: killing vhost app 00:39:20.765 INFO: EXIT DONE 00:39:23.300 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:39:23.300 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:39:23.300 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:39:25.834 Cleaning 00:39:25.834 Removing: /var/run/dpdk/spdk0/config 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:25.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:25.834 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:25.834 Removing: /var/run/dpdk/spdk1/config 00:39:25.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:25.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:26.093 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:26.093 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:26.093 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:26.093 Removing: /var/run/dpdk/spdk1/mp_socket 00:39:26.093 Removing: /var/run/dpdk/spdk2/config 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:26.093 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:26.093 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:26.093 Removing: /var/run/dpdk/spdk3/config 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:26.093 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:26.093 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:26.093 Removing: /var/run/dpdk/spdk4/config 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:26.093 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:26.093 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:26.093 Removing: /dev/shm/bdev_svc_trace.1 00:39:26.093 Removing: /dev/shm/nvmf_trace.0 00:39:26.093 Removing: /dev/shm/spdk_tgt_trace.pid942952 00:39:26.093 Removing: /var/run/dpdk/spdk0 00:39:26.093 Removing: /var/run/dpdk/spdk1 00:39:26.093 Removing: /var/run/dpdk/spdk2 00:39:26.093 Removing: /var/run/dpdk/spdk3 00:39:26.093 Removing: /var/run/dpdk/spdk4 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1047834 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1052148 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1061919 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1067096 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1071079 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1071627 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1077764 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1083563 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1083569 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1084480 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1085391 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1086216 
00:39:26.093 Removing: /var/run/dpdk/spdk_pid1086779 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1086781 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1087074 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1087331 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1087352 00:39:26.093 Removing: /var/run/dpdk/spdk_pid1088253 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1089374 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1090289 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1090881 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1090974 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1091209 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1092227 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1093203 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1101506 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1101759 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1106012 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1111643 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1114230 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1124398 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1133338 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1135392 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1136317 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1152901 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1156666 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1181660 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1186134 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1187760 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1189369 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1189601 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1189619 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1189840 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1190129 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1191952 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1192722 00:39:26.352 Removing: /var/run/dpdk/spdk_pid1193209 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1195306 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1195795 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1196302 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1200467 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1205923 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1210775 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1247084 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1251418 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1257486 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1258788 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1260236 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1264449 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1268450 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1275667 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1275675 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1280152 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1280388 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1280610 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1281073 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1281078 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1282467 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1284066 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1285706 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1287443 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1289097 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1290693 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1297061 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1297631 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1299371 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1300416 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1306121 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1308654 
00:39:26.353 Removing: /var/run/dpdk/spdk_pid1314016 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1319278 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1327615 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1334435 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1334437 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1352911 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1353469 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1353940 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1354443 00:39:26.353 Removing: /var/run/dpdk/spdk_pid1355148 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1355659 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1356313 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1356784 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1360813 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1361044 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1367104 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1367161 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1369370 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1376878 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1376883 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1381904 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1383985 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1386345 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1387387 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1389362 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1390455 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1399151 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1399610 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1400183 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1402532 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1403016 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1403482 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1407289 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1407293 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1408784 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1409133 00:39:26.612 Removing: /var/run/dpdk/spdk_pid1409288 00:39:26.612 Removing: /var/run/dpdk/spdk_pid940822 00:39:26.612 Removing: /var/run/dpdk/spdk_pid941884 00:39:26.612 Removing: /var/run/dpdk/spdk_pid942952 00:39:26.612 Removing: /var/run/dpdk/spdk_pid943579 00:39:26.612 Removing: /var/run/dpdk/spdk_pid944485 00:39:26.612 Removing: /var/run/dpdk/spdk_pid944545 00:39:26.612 Removing: /var/run/dpdk/spdk_pid945516 00:39:26.612 Removing: /var/run/dpdk/spdk_pid945615 00:39:26.612 Removing: /var/run/dpdk/spdk_pid945864 00:39:26.612 Removing: /var/run/dpdk/spdk_pid947384 00:39:26.612 Removing: /var/run/dpdk/spdk_pid948641 00:39:26.612 Removing: /var/run/dpdk/spdk_pid948924 00:39:26.612 Removing: /var/run/dpdk/spdk_pid949211 00:39:26.612 Removing: /var/run/dpdk/spdk_pid949505 00:39:26.612 Removing: /var/run/dpdk/spdk_pid949799 00:39:26.612 Removing: /var/run/dpdk/spdk_pid950050 00:39:26.612 Removing: /var/run/dpdk/spdk_pid950298 00:39:26.612 Removing: /var/run/dpdk/spdk_pid950570 00:39:26.612 Removing: /var/run/dpdk/spdk_pid951126 00:39:26.612 Removing: /var/run/dpdk/spdk_pid954083 00:39:26.612 Removing: /var/run/dpdk/spdk_pid954342 00:39:26.612 Removing: /var/run/dpdk/spdk_pid954598 00:39:26.612 Removing: /var/run/dpdk/spdk_pid954827 00:39:26.612 Removing: /var/run/dpdk/spdk_pid955100 00:39:26.612 Removing: /var/run/dpdk/spdk_pid955246 00:39:26.612 Removing: /var/run/dpdk/spdk_pid955601 00:39:26.612 Removing: /var/run/dpdk/spdk_pid955703 00:39:26.612 Removing: /var/run/dpdk/spdk_pid956051 00:39:26.612 Removing: /var/run/dpdk/spdk_pid956088 00:39:26.612 Removing: /var/run/dpdk/spdk_pid956348 00:39:26.612 Removing: 
/var/run/dpdk/spdk_pid956364 00:39:26.612 Removing: /var/run/dpdk/spdk_pid956786 00:39:26.612 Removing: /var/run/dpdk/spdk_pid956961 00:39:26.612 Removing: /var/run/dpdk/spdk_pid957245 00:39:26.612 Removing: /var/run/dpdk/spdk_pid957524 00:39:26.612 Removing: /var/run/dpdk/spdk_pid957744 00:39:26.612 Removing: /var/run/dpdk/spdk_pid957809 00:39:26.612 Removing: /var/run/dpdk/spdk_pid958059 00:39:26.612 Removing: /var/run/dpdk/spdk_pid958307 00:39:26.612 Removing: /var/run/dpdk/spdk_pid958561 00:39:26.612 Removing: /var/run/dpdk/spdk_pid958809 00:39:26.612 Removing: /var/run/dpdk/spdk_pid959055 00:39:26.871 Removing: /var/run/dpdk/spdk_pid959310 00:39:26.871 Removing: /var/run/dpdk/spdk_pid959555 00:39:26.871 Removing: /var/run/dpdk/spdk_pid959811 00:39:26.871 Removing: /var/run/dpdk/spdk_pid960056 00:39:26.871 Removing: /var/run/dpdk/spdk_pid960303 00:39:26.872 Removing: /var/run/dpdk/spdk_pid960554 00:39:26.872 Removing: /var/run/dpdk/spdk_pid960808 00:39:26.872 Removing: /var/run/dpdk/spdk_pid961053 00:39:26.872 Removing: /var/run/dpdk/spdk_pid961339 00:39:26.872 Removing: /var/run/dpdk/spdk_pid961663 00:39:26.872 Removing: /var/run/dpdk/spdk_pid961927 00:39:26.872 Removing: /var/run/dpdk/spdk_pid962181 00:39:26.872 Removing: /var/run/dpdk/spdk_pid962430 00:39:26.872 Removing: /var/run/dpdk/spdk_pid962679 00:39:26.872 Removing: /var/run/dpdk/spdk_pid963100 00:39:26.872 Removing: /var/run/dpdk/spdk_pid963336 00:39:26.872 Removing: /var/run/dpdk/spdk_pid963688 00:39:26.872 Removing: /var/run/dpdk/spdk_pid967339 00:39:26.872 Clean 00:39:26.872 12:29:16 -- common/autotest_common.sh@1451 -- # return 0 00:39:26.872 12:29:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:39:26.872 12:29:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:26.872 12:29:16 -- common/autotest_common.sh@10 -- # set +x 00:39:26.872 12:29:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:39:26.872 12:29:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:26.872 12:29:16 -- common/autotest_common.sh@10 -- # set +x 00:39:26.872 12:29:16 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:26.872 12:29:16 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:26.872 12:29:16 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:26.872 12:29:16 -- spdk/autotest.sh@391 -- # hash lcov 00:39:26.872 12:29:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:39:26.872 12:29:16 -- spdk/autotest.sh@393 -- # hostname 00:39:26.872 12:29:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:27.131 geninfo: WARNING: invalid characters removed from testname! 
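The trace above captures the per-test counters into cov_test.info with lcov; the entries that follow (autotest.sh@394 through @399) merge that capture with the pre-test baseline and strip out-of-tree paths from the combined tracefile. A minimal sketch of that capture, merge, and filter flow, assuming the same repository and output layout as this job; the variable names are mine and LCOV_OPTS is abbreviated, so this is an illustration of the pattern rather than the exact commands the script runs:

  # Sketch of the coverage flow traced by autotest.sh above; paths and variable names are assumptions.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # assumed checkout location
  OUT_DIR="$SPDK_DIR/../output"                                   # assumed output directory
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # 1. Capture counters accumulated under the SPDK tree into a per-test info file.
  lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

  # 2. Merge the pre-test baseline with the per-test capture into one tracefile.
  lcov $LCOV_OPTS -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"

  # 3. Remove uninteresting paths one pattern at a time, rewriting cov_total.info in place.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
  done

The loop collapses the separate per-pattern invocations seen below (autotest.sh@395 through @399) into one place for readability; the script itself issues them as individual commands.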
00:39:49.061 12:29:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:49.626 12:29:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:51.609 12:29:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:53.513 12:29:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:55.416 12:29:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:56.820 12:29:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:58.723 12:29:48 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:58.723 12:29:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:58.723 12:29:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:58.723 12:29:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:58.723 12:29:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:58.723 12:29:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.723 12:29:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.723 12:29:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.723 12:29:48 -- paths/export.sh@5 -- $ export PATH 00:39:58.723 12:29:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:58.723 12:29:48 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:39:58.723 12:29:48 -- common/autobuild_common.sh@444 -- $ date +%s 00:39:58.723 12:29:48 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721039388.XXXXXX 00:39:58.723 12:29:48 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721039388.JfLv6R 00:39:58.723 12:29:48 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:39:58.723 12:29:48 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:39:58.723 12:29:48 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:39:58.723 12:29:48 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:39:58.723 12:29:48 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:39:58.723 12:29:48 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:39:58.723 12:29:48 -- common/autobuild_common.sh@460 -- $ get_config_params 00:39:58.723 12:29:48 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:39:58.723 12:29:48 -- common/autotest_common.sh@10 -- $ set +x 00:39:58.723 12:29:48 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:39:58.723 12:29:48 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:39:58.723 12:29:48 -- pm/common@17 -- $ local monitor 00:39:58.723 12:29:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:58.723 12:29:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:58.723 12:29:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:58.724 
12:29:48 -- pm/common@21 -- $ date +%s 00:39:58.724 12:29:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:58.724 12:29:48 -- pm/common@21 -- $ date +%s 00:39:58.724 12:29:48 -- pm/common@25 -- $ sleep 1 00:39:58.724 12:29:48 -- pm/common@21 -- $ date +%s 00:39:58.724 12:29:48 -- pm/common@21 -- $ date +%s 00:39:58.724 12:29:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039388 00:39:58.724 12:29:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039388 00:39:58.724 12:29:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039388 00:39:58.724 12:29:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721039388 00:39:58.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039388_collect-vmstat.pm.log 00:39:58.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039388_collect-cpu-load.pm.log 00:39:58.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039388_collect-cpu-temp.pm.log 00:39:58.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721039388_collect-bmc-pm.bmc.pm.log 00:39:59.921 12:29:49 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:39:59.921 12:29:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:39:59.921 12:29:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:59.921 12:29:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:59.921 12:29:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:39:59.921 12:29:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:39:59.921 12:29:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:39:59.921 12:29:49 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:59.921 12:29:49 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:59.921 12:29:49 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:59.921 12:29:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:39:59.921 12:29:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:59.921 12:29:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:59.921 12:29:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:59.921 12:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:59.921 12:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:39:59.921 12:29:49 -- pm/common@44 -- $ pid=1420228 00:39:59.921 12:29:49 -- pm/common@50 -- $ kill -TERM 1420228 00:39:59.921 12:29:49 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:39:59.921 12:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:39:59.921 12:29:49 -- pm/common@44 -- $ pid=1420230 00:39:59.921 12:29:49 -- pm/common@50 -- $ kill -TERM 1420230 00:39:59.921 12:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:59.921 12:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:39:59.921 12:29:49 -- pm/common@44 -- $ pid=1420232 00:39:59.921 12:29:49 -- pm/common@50 -- $ kill -TERM 1420232 00:39:59.921 12:29:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:59.921 12:29:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:39:59.921 12:29:49 -- pm/common@44 -- $ pid=1420254 00:39:59.921 12:29:49 -- pm/common@50 -- $ sudo -E kill -TERM 1420254 00:39:59.921 + [[ -n 821970 ]] 00:39:59.921 + sudo kill 821970 00:39:59.930 [Pipeline] } 00:39:59.948 [Pipeline] // stage 00:39:59.954 [Pipeline] } 00:39:59.972 [Pipeline] // timeout 00:39:59.978 [Pipeline] } 00:39:59.995 [Pipeline] // catchError 00:40:00.000 [Pipeline] } 00:40:00.019 [Pipeline] // wrap 00:40:00.025 [Pipeline] } 00:40:00.041 [Pipeline] // catchError 00:40:00.050 [Pipeline] stage 00:40:00.052 [Pipeline] { (Epilogue) 00:40:00.068 [Pipeline] catchError 00:40:00.070 [Pipeline] { 00:40:00.085 [Pipeline] echo 00:40:00.086 Cleanup processes 00:40:00.092 [Pipeline] sh 00:40:00.375 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:00.375 1420342 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:40:00.375 1420627 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:00.389 [Pipeline] sh 00:40:00.670 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:00.670 ++ grep -v 'sudo pgrep' 00:40:00.670 ++ awk '{print $1}' 00:40:00.670 + sudo kill -9 1420342 00:40:00.682 [Pipeline] sh 00:40:00.968 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:10.957 [Pipeline] sh 00:40:11.240 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:11.240 Artifacts sizes are good 00:40:11.252 [Pipeline] archiveArtifacts 00:40:11.258 Archiving artifacts 00:40:11.465 [Pipeline] sh 00:40:11.764 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:11.778 [Pipeline] cleanWs 00:40:11.787 [WS-CLEANUP] Deleting project workspace... 00:40:11.787 [WS-CLEANUP] Deferred wipeout is used... 00:40:11.793 [WS-CLEANUP] done 00:40:11.794 [Pipeline] } 00:40:11.813 [Pipeline] // catchError 00:40:11.825 [Pipeline] sh 00:40:12.108 + logger -p user.info -t JENKINS-CI 00:40:12.116 [Pipeline] } 00:40:12.133 [Pipeline] // stage 00:40:12.139 [Pipeline] } 00:40:12.154 [Pipeline] // node 00:40:12.159 [Pipeline] End of Pipeline 00:40:12.200 Finished: SUCCESS